Using entropy as a metric of the diversity among clients' local model parameters, the proposed FedEnt algorithm adaptively adjusts each client's learning rate to achieve fast convergence of the global model under non-IID data distributions.
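One way to picture the idea is the following minimal sketch: measure each client's parameter divergence from the global model, form a distribution over clients, and use its normalized Shannon entropy as a diversity signal that scales the learning rate. The function `entropy_adaptive_lrs`, the specific scaling rule, and all constants here are hypothetical illustrations, not FedEnt's actual update.

```python
import numpy as np

def entropy_adaptive_lrs(client_params, global_params, base_lr=0.1, eps=1e-12):
    """Illustrative sketch (not the paper's exact rule): scale learning
    rates by the normalized entropy of the clients' divergence distribution."""
    # Distance of each client's local model from the global model.
    dists = np.array([np.linalg.norm(p - global_params) for p in client_params])
    # Normalize divergences into a probability distribution over clients.
    probs = dists / (dists.sum() + eps)
    # Shannon entropy of that distribution; high entropy means divergence
    # is spread evenly, low entropy means a few clients dominate it.
    ent = -np.sum(probs * np.log(probs + eps))
    max_ent = np.log(len(client_params))  # entropy of the uniform case
    diversity = ent / max_ent             # in [0, 1]
    # Hypothetical rule: shrink all step sizes when divergence concentrates
    # on a few clients (strong heterogeneity signal).
    return base_lr * (0.5 + 0.5 * diversity) * np.ones(len(client_params))
```

When every client is equally far from the global model the distribution is uniform, the entropy is maximal, and the rule returns the base learning rate unchanged; skewed divergences pull the rate down.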
Federated learning can also be improved by letting each client use its own adaptive step size, tuned to the local smoothness of that client's objective function, which yields faster convergence without additional hyperparameter tuning.
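A common way to adapt a step size to local smoothness is to estimate a local Lipschitz constant from successive gradients, L_k = ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||, and take steps of size 1/L_k. The sketch below applies this Barzilai-Borwein-flavoured rule to a single client's objective; it is an illustrative stand-in for whatever rule the cited work actually uses, and the function name and constants are assumptions.

```python
import numpy as np

def adaptive_gd(grad, x0, iters=100, lr0=1e-3):
    """Gradient descent with a step size set from a local smoothness
    estimate (illustrative sketch, not a specific paper's method)."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - lr0 * g_prev  # small bootstrap step
    for _ in range(iters):
        g = grad(x)
        denom = np.linalg.norm(x - x_prev)
        if denom == 0:          # converged: no movement, stop
            break
        # Local smoothness estimate from the last two iterates.
        L = np.linalg.norm(g - g_prev) / denom
        step = 1.0 / max(L, 1e-12)
        x_prev, g_prev = x, g
        x = x - step * g
    return x

# Usage on a toy quadratic f(x) = 0.5 * x^T A x, minimized at the origin;
# the step size automatically adapts between 1/4 and 1 as the local
# curvature estimate varies.
A = np.diag([1.0, 4.0])
xs = adaptive_gd(lambda x: A @ x, np.array([1.0, 1.0]))
```

No learning rate schedule is tuned here: the step size is recomputed every iteration from observed gradient changes, which is exactly the "no additional tuning" appeal of smoothness-adaptive methods.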