Federated Bilingual Word Prediction on Phones
We explore a bilingual next-word predictor (NWP) under federated optimization for a mobile application. A character-based LSTM is server-trained on English and Dutch texts from a custom parallel corpus; its performance serves as the target. We simulate a federated learning environment to assess the feasibility of distributed training for the same model, using the popular Federated Averaging (FedAvg) algorithm as the aggregation method. We show that the federated LSTM achieves reasonable performance, though it remains below the server-trained target, and we suggest possible next steps to bridge this gap. Furthermore, we explore the effects of language imbalance by varying the ratio of English and Dutch training texts (or clients). We show the model maintains the performance of the balanced case up to an 80/20 imbalance, after which performance decays rapidly. Lastly, we describe the implementation of local client training, word prediction, and client-server communication in a custom virtual keyboard for Android platforms. Additionally, homomorphic encryption is applied to provide secure aggregation, protecting users from a malicious server.
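The FedAvg aggregation step referenced above can be sketched as follows. This is a minimal illustration of the weighted parameter average only; client sampling, local SGD rounds, and the encryption layer described in the paper are omitted, and the function name `fedavg` is our own:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter lists into a global model (FedAvg).

    client_weights: list of per-client parameter lists (arrays of equal shapes)
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    # Each client contributes proportionally to its local data size.
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hypothetical clients holding one-parameter models:
# client 0 has 1 example, client 1 has 3, so the average is 0.25*1 + 0.75*3.
global_params = fedavg([[np.array([1.0])], [np.array([3.0])]], [1, 3])
```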