NVIDIA (CEO Jensen Huang) announced on the 29th an update to the NeMo Megatron framework that delivers up to 30% faster training as large language models continue to grow in size and complexity.

The update includes two pioneering techniques and a hyperparameter tool that optimize and scale LLM training across multiple GPUs, giving developers new capabilities for training and building models on the NVIDIA AI platform.
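To illustrate the kind of search such a hyperparameter tool automates, the sketch below enumerates valid ways to split a fixed GPU budget across tensor, pipeline, and data parallelism. It is a minimal illustration under stated assumptions, not NVIDIA's tool: the function and field names are hypothetical, and a real tool would also model memory use and communication cost to rank the candidates.

```python
from dataclasses import dataclass


@dataclass
class ParallelLayout:
    tensor_parallel: int    # GPUs splitting each layer's weight matrices
    pipeline_parallel: int  # GPUs splitting the model into layer stages
    data_parallel: int      # model replicas processing different batches


def candidate_layouts(num_gpus: int, num_layers: int) -> list[ParallelLayout]:
    """Enumerate factorizations where tp * pp * dp == num_gpus.

    Illustrative only: this captures the combinatorial search space a
    hyperparameter tool explores, not its actual cost model.
    """
    layouts = []
    for tp in range(1, num_gpus + 1):
        if num_gpus % tp:
            continue
        for pp in range(1, num_gpus // tp + 1):
            if (num_gpus // tp) % pp or num_layers % pp:
                continue  # pipeline stages must divide the layer count evenly
            dp = num_gpus // (tp * pp)
            layouts.append(ParallelLayout(tp, pp, dp))
    return layouts


if __name__ == "__main__":
    # Example: 16 GPUs training a hypothetical 48-layer model.
    for layout in candidate_layouts(16, 48):
        print(layout)
```

Even for small GPU counts the space of valid layouts is large, and each choice trades memory footprint against communication overhead, which is why automating the search can meaningfully shorten time-to-train.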

BLOOM, the world’s largest open-science, open-access multilingual model with 176 billion parameters, was recently trained on the NVIDIA AI platform, enabling text generation in 46 natural languages and 13 programming languages.

In addition, the NVIDIA AI platform supports the Megatron-Turing NLG model (MT-NLG), the most powerful transformer language model, containing 530 billion parameters.

LLMs are among today’s most important state-of-the-art technologies, with up to trillions of parameters learned from text. Developing them, however, requires deep technical expertise, distributed infrastructure, and a full-stack approach, which is expensive and time-consuming.

Even so, they offer great advantages for real-time content generation, text summarization, customer-service chatbots, and question answering in conversational AI interfaces.
