Amharic LLaMA and LLaVA aim to enhance language models for low-resource languages like Amharic through data augmentation and multimodal capabilities.
The author explores training LLaMA-2 to understand Amharic by augmenting scarce training data through machine translation, then extending the model with visual instruction tuning to give it multimodal capabilities, in the style of LLaVA. This work addresses the difficulty low-resource languages face in benefiting from large language models.
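To make the translation-based augmentation step concrete, below is a minimal sketch of translating English instruction-tuning data into Amharic with an open machine-translation model. The checkpoint (facebook/nllb-200-distilled-600M), the FLORES-200 language codes (eng_Latn, amh_Ethi), and the generation settings are illustrative assumptions, not necessarily the paper's exact pipeline.

```python
# Sketch: augmenting Amharic training data by machine-translating an
# English instruction pair with NLLB-200. Checkpoint and settings are
# assumptions for illustration, not the paper's confirmed configuration.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/nllb-200-distilled-600M"  # assumed MT checkpoint

# src_lang tells the NLLB tokenizer the input is English (eng_Latn).
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def translate_to_amharic(texts: list[str]) -> list[str]:
    """Translate a batch of English strings into Amharic (amh_Ethi)."""
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(
        **inputs,
        # Force the decoder to begin with the Amharic language token.
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("amh_Ethi"),
        max_length=512,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# Example: convert one English instruction pair into an Amharic pair.
sample = {
    "instruction": "Describe the water cycle.",
    "output": "Water evaporates, condenses into clouds, and falls as rain.",
}
amharic_pair = dict(zip(sample.keys(), translate_to_amharic(list(sample.values()))))
print(amharic_pair)
```

Applied across a large English instruction corpus, this kind of translation step can multiply the available Amharic training data cheaply, at the cost of some translation noise in the resulting examples.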