A novel two-stage fine-tuning process that reduces hallucinations and promotes creative, compound sentence generation when large language models are used for financial report writing.
A novel framework called Quality-Guided Contrastive Rationale Distillation (QCRD) that enhances the reasoning capabilities of smaller language models by effectively distilling both positive and negative knowledge from large language models through contrastive learning.
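As a rough illustration of what "contrasting positive and negative rationales" can mean during distillation, here is a minimal PyTorch sketch; the hinge form, weights, and the `contrastive_rationale_loss` name are illustrative assumptions, not QCRD's published loss (which also scores rationale quality when building the positive/negative sets).

```python
import torch
import torch.nn.functional as F

def contrastive_rationale_loss(pos_logits, pos_labels,
                               neg_logits, neg_labels,
                               neg_weight=0.1, margin=5.0):
    # Token-level cross-entropy on the positive (high-quality) rationale:
    # the student is trained to reproduce it.
    pos_ce = F.cross_entropy(pos_logits.reshape(-1, pos_logits.size(-1)),
                             pos_labels.reshape(-1))
    # Cross-entropy on the negative (low-quality) rationale: the hinge term
    # pushes the student to assign it low likelihood (high CE), up to `margin`.
    neg_ce = F.cross_entropy(neg_logits.reshape(-1, neg_logits.size(-1)),
                             neg_labels.reshape(-1))
    return pos_ce + neg_weight * torch.clamp(margin - neg_ce, min=0.0)

# Dummy shapes: batch of 2 rationales, 8 tokens each, vocabulary of 100.
V, T = 100, 8
loss = contrastive_rationale_loss(torch.randn(2, T, V), torch.randint(0, V, (2, T)),
                                  torch.randn(2, T, V), torch.randint(0, V, (2, T)))
```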
Large language models can serve as explainers and evaluators to improve the performance of smaller models on the Chinese Grammatical Error Correction task.
The proposed Cross-Utterance Conditioned Variational Autoencoder (CUC-VAE) framework leverages contextual information from surrounding utterances to generate more natural and expressive speech by modeling prosody.
AudioComposer is a novel text-to-audio generation framework that utilizes natural language descriptions to provide precise control over content and style, without requiring additional conditions or complex network structures.
CLAIR-A is a simple and flexible method that leverages the zero-shot capabilities of large language models (LLMs) to evaluate candidate audio captions by directly asking an LLM for a semantic distance score between the candidate and its reference captions, along with an interpretable justification for that score.
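In the spirit of that idea, a minimal sketch of LLM-based caption judging might look as follows; the prompt wording, JSON reply format, and the `llm_complete` callable are assumptions made for illustration, not CLAIR-A's exact protocol.

```python
import json

def llm_caption_score(candidate, references, llm_complete):
    # Ask an LLM for a 0-100 semantic match score plus a one-sentence reason.
    # `llm_complete` is any prompt -> text function (OpenAI client, local model, ...).
    prompt = (
        "You are judging audio captions.\n"
        f"Candidate caption: {candidate}\n"
        f"Reference captions: {references}\n"
        "On a scale of 0 to 100, how likely is it that the candidate describes "
        "the same audio clip as the references? Reply as JSON: "
        '{"score": <int>, "reason": "<one sentence>"}'
    )
    parsed = json.loads(llm_complete(prompt))
    return parsed["score"] / 100.0, parsed["reason"]

# Stand-in "LLM" so the snippet runs offline.
fake_llm = lambda p: '{"score": 85, "reason": "Both mention a dog barking."}'
print(llm_caption_score("a dog barks twice", ["a dog barking"], fake_llm))
```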
Multilingual Reverse Instructions (MURI) is a novel method for generating high-quality instruction tuning datasets for low-resource languages without requiring human annotators, task-annotated data, or pre-trained multilingual models.
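The core of the reverse-instruction idea is to treat existing human-written text as the answer and have an LLM write the instruction that could have produced it; MURI additionally routes text through machine translation so this step can happen in a high-resource language, which the sketch below omits. The prompt wording and the `llm_complete` callable are illustrative assumptions.

```python
def reverse_instruction(document, llm_complete):
    # Existing human-written text becomes the *output*; the LLM supplies the
    # matching instruction, yielding an (instruction, output) pair with no
    # human annotation or task-labeled data.
    prompt = ("Read the text below and write the instruction a user could "
              "have given to make an assistant produce it.\n\n"
              f"Text:\n{document}\n\nInstruction:")
    return {"instruction": llm_complete(prompt).strip(), "output": document}

# Stand-in "LLM" so the snippet runs offline.
fake_llm = lambda p: "Summarize the history of the town library in two sentences."
pair = reverse_instruction("The town library opened in 1931 ...", fake_llm)
```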
A novel pipeline-based data augmentation method that leverages large language models to synthesize domain-specific datasets, enhancing fine-grained sentence representation learning.
Supervised Fine-Tuning (SFT) alone is sufficient for text classification tasks, while Direct Preference Optimization (DPO) improves performance on more complex clinical NLP tasks such as Triage, Clinical Reasoning, and Summarization.
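For reference, the DPO objective itself is compact: the policy is trained to prefer the chosen response over the rejected one by a larger margin than a frozen reference model (usually the SFT checkpoint). The sketch below is the standard loss from Rafailov et al. (2023) applied to generic preference pairs; the clinical datasets, log-probability values, and beta are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Inputs are per-example summed log-probs of full responses under the
    # trained policy and the frozen reference model.
    logits = beta * ((policy_chosen_logps - policy_rejected_logps)
                     - (ref_chosen_logps - ref_rejected_logps))
    return -F.logsigmoid(logits).mean()

# Toy batch of 4 preference pairs.
loss = dpo_loss(torch.tensor([-10.0, -12.0, -9.5, -11.0]),
                torch.tensor([-13.0, -12.5, -14.0, -11.5]),
                torch.tensor([-11.0, -12.0, -10.0, -11.0]),
                torch.tensor([-12.0, -12.0, -13.0, -11.0]))
```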
Integrating token-level and sentence-level objectives in cross-lingual sentence encoders significantly improves the quality of sentence representations across various tasks.
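A common way to combine the two signal types is to add a token-level cross-entropy (e.g. for masked-token prediction) to a sentence-level contrastive alignment over parallel sentence pairs; the sketch below is a generic version of that recipe, with loss forms, weighting, and the `joint_objective` name chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def joint_objective(token_logits, token_labels,
                    src_sent_emb, tgt_sent_emb,
                    sent_weight=1.0, temperature=0.05):
    # Token-level term: cross-entropy over (masked) token predictions;
    # positions labelled -100 are ignored, as in typical MLM setups.
    token_loss = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                                 token_labels.reshape(-1), ignore_index=-100)
    # Sentence-level term: InfoNCE where row i of src/tgt embeddings are
    # translations of each other (positives) and the rest of the batch
    # serves as in-batch negatives.
    src = F.normalize(src_sent_emb, dim=-1)
    tgt = F.normalize(tgt_sent_emb, dim=-1)
    sims = src @ tgt.t() / temperature
    sent_loss = F.cross_entropy(sims, torch.arange(sims.size(0)))
    return token_loss + sent_weight * sent_loss

# Toy batch: 4 parallel sentence pairs, 6 tokens, vocab 50, embedding dim 16.
B, T, V, D = 4, 6, 50, 16
loss = joint_objective(torch.randn(B, T, V), torch.randint(0, V, (B, T)),
                       torch.randn(B, D), torch.randn(B, D))
```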