

Meta Platforms Inc. is rolling out a new version of its Meta AI chatbot that can customize its responses based on information provided by the user.
The company announced the update in a blog post published this morning.
Meta AI is an artificial intelligence assistant that rolled out for Facebook, Messenger and Instagram in 2023. It offers a feature set similar to that of OpenAI's ChatGPT: Meta AI can search the web for information, translate text, generate images and help the user with programming tasks.
Today’s update adds two new personalization features to the service.
The first enhancement allows Meta AI to customize its responses to prompts based on information that consumers share with it in WhatsApp and Messenger chat sessions. For example, if a user requests a list of data visualization programs that run on Windows, the chatbot can deduce that the user has a Windows machine. The next time the user asks for software advice, Meta AI won't recommend programs that only work on macOS.
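Meta hasn't detailed how the feature works internally, but the general pattern can be illustrated with a minimal sketch: facts inferred from earlier chats are stored per user and prepended to later prompts so the model can tailor its answers. The store, the helper functions and the prompt format below are hypothetical, not Meta's actual implementation.

```python
# Hypothetical sketch of cross-session preference memory. Facts
# inferred from earlier chats are recorded per user and injected
# into later prompts.
from collections import defaultdict

memory: dict[str, list[str]] = defaultdict(list)

def remember(user_id: str, fact: str) -> None:
    """Record a fact inferred from a chat session."""
    memory[user_id].append(fact)

def build_prompt(user_id: str, question: str) -> str:
    """Prepend remembered facts so the model can tailor its answer."""
    facts = "; ".join(memory[user_id]) or "none"
    return f"Known user facts: {facts}\nUser question: {question}"

remember("alice", "uses a Windows machine")
print(build_prompt("alice", "Recommend data visualization software."))
```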
Initially, the capability is available in the U.S. and Canada across the mobile versions of Facebook, Messenger and WhatsApp.
The second new feature that debuted today allows Meta AI to take the user's personal information into account when generating responses. The chatbot could, for example, customize its response to the prompt "find upcoming concerts nearby" based on the user's location. Meta AI can also factor in other data points, such as the genres of music videos the user watched in the past week.
The feature is available on Facebook, Messenger and Instagram.
Under the hood, Meta AI is powered by the Llama 3.2 family of large language models that the company open-sourced in September. The LLM series is headlined by two multimodal models with 11 billion and 90 billion parameters. They can not only process text but also analyze images uploaded by the user.
The two models are based on an upgraded version of Meta’s previous-generation Llama 3.1 LLM series, which can only process text. To create the multimodal Llama 3.2, the company modified the original text-only design by adding an “adapter” module that allows it to process images. This module comprises several interconnected collections, or layers, of artificial neurons.
LLMs turn the data they ingest into mathematical structures called embeddings. Some embeddings encode text while others are optimized to hold images. Llama 3.2's adapter module bridges the technical inconsistencies between the two embedding varieties, which allows the model series to process images even though it's based on the text-only Llama 3.1 series.
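Meta hasn't published the adapter's exact layer design, but the core idea can be sketched in a few lines of PyTorch: embeddings produced by a vision encoder are projected into the dimensionality the language model expects, so the text-only model can consume image tokens alongside text tokens. The dimensions, module names and simple projection stack below are illustrative assumptions, not Llama 3.2's actual architecture.

```python
# Illustrative sketch of a vision adapter: maps image-encoder outputs
# into the text model's embedding space. Sizes are assumptions.
import torch
import torch.nn as nn

class VisionAdapter(nn.Module):
    def __init__(self, image_dim: int = 1280, text_dim: int = 4096,
                 num_layers: int = 2):
        super().__init__()
        # A stack of layers that converts image embeddings into the
        # dimensionality the language model expects.
        layers: list[nn.Module] = []
        in_dim = image_dim
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, text_dim), nn.GELU()]
            in_dim = text_dim
        self.proj = nn.Sequential(*layers)

    def forward(self, image_embeddings: torch.Tensor) -> torch.Tensor:
        # image_embeddings: (batch, num_patches, image_dim)
        # returns:          (batch, num_patches, text_dim)
        return self.proj(image_embeddings)

adapter = VisionAdapter()
image_tokens = torch.randn(1, 256, 1280)  # stand-in for vision-encoder output
text_ready = adapter(image_tokens)        # shape: (1, 256, 4096)
```

Once adapted this way, the image tokens can be fed to the language model alongside text token embeddings, which is what lets a model built on a text-only lineage reason about pictures.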
Last week, Meta revealed plans to release a new iteration of its LLM series, called Llama 4, later this year. It's plausible that Meta AI will be upgraded to the new model in a future update. Moreover, the company reportedly plans to equip the chatbot with an in-house search engine for browsing the web.