LLMs are now being trained on custom data

Updated at 04:42 AM

Allen Pike:

A recent example of the rise of custom data is Microsoft’s Phi-3 Technical Report, published in April. phi-3-mini is only 3.8 billion parameters, a waif in LLM terms, but claims performance competitive with the impressive but much heavier Mixtral model. The paper credits some of this improvement to including high-quality synthetic data, generated by larger LLMs, in the training set. Synthetic data lets them fill gaps in the internet-sourced data and improve model performance for a given size.
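To make the idea concrete, here is a minimal sketch of the general pattern: a larger "teacher" model answers targeted prompts, and the pairs are collected as training records for a smaller model. The `teacher_generate` function is a hypothetical stand-in for a real LLM API call; this is not the Phi-3 team's actual pipeline, just an illustration of the shape of the technique.

```python
import json

def teacher_generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to a larger "teacher" LLM
    # (in practice, an API call); here it returns a canned response.
    return f"A concise, textbook-style answer to: {prompt}"

def build_synthetic_dataset(topics):
    """Turn a list of topics into instruction/response training records,
    targeting areas underrepresented in web-scraped data."""
    records = []
    for topic in topics:
        prompt = f"Explain {topic} to a beginner."
        records.append({
            "instruction": prompt,
            "response": teacher_generate(prompt),
        })
    return records

# Emit the records as JSONL, a common format for fine-tuning data.
dataset = build_synthetic_dataset(["gravity", "photosynthesis"])
for record in dataset:
    print(json.dumps(record))
```

The point of the pattern is control: instead of hoping the web happens to cover a topic well, you choose the topics and quality bar yourself, then filter and curate the teacher's outputs before training.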