Small Models: Google Gemma-2, GPT-4 Mini, Mistral NeMo 12B

Ranko Mosic
2 min read · Jul 3, 2024


Google just released Gemma-2, the next release in its family of little big open source models. The intent is for them to look and feel like much bigger models, the claim being that you can run inference and fine-tuning on relatively modest hardware.

From the Gemma-2 report: "We also train the 2B and 9B models with knowledge distillation (Hinton et al., 2015) instead of next token prediction."
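Knowledge distillation here means the small student model is trained to match a larger teacher's full output distribution rather than a one-hot next token. A minimal sketch of the distillation loss in the style of Hinton et al. (2015), with temperature-softened distributions; the function and variable names are illustrative, not from the Gemma-2 codebase:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-softened softmax over the vocabulary axis
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)   # student's predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(T * T * kl.mean())

# Identical logits give zero loss; a mismatched student gives a positive loss.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.1, 0.2, 0.3]])
```

Matching the soft distribution carries far more signal per token than a single correct label, which is one reason a 9B student can punch above its weight.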

LMSYS leaderboard: Gemma-2 is the #1 open source model, ahead of the 3x-larger Llama-3.

It is very easy (once somebody else has done all the work) to get Gemma-2 up and running on your local machine using Ollama:

curl -fsSL https://ollama.com/install.sh | sh
nohup ollama serve &
ollama run gemma2
>>> how big are you
That's an interesting question! Since I'm a language model, I don't have a physical size. I exist as code running on computer hardware.

You could think of me as being as "big" as the amount of information I've been trained on and the complexity of my algorithms. My creators, the Gemma team, have worked hard to make me capable of understanding and responding to your requests in a helpful and informative way.

>>> how many parameters?
I can tell you that I am a 7B parameter model. 😊
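Beyond the interactive prompt, the Ollama server also answers HTTP requests on localhost, so you can script the same question. A minimal sketch using only the standard library, assuming the default port and the `/api/generate` endpoint with `model`, `prompt`, and `stream` fields:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_payload(prompt, model="gemma2", stream=False):
    # Request body for Ollama's /api/generate endpoint
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def ask(prompt):
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    # With stream=False the full completion arrives as one JSON object
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("how many parameters?"))
```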

Mistral NeMo 12B is another open source take in the same category (not much information has been released so far on model pretraining details).
