LLMs must be aligned with human expectations to ensure safety and utility; we propose decoupling alignment from the LLM itself by training separate aligner models on synthetic data.