Dell’s Project Helix heralds a shift toward specially trained generative AI

A woman using a laptop talks to an artificial intelligence assistant.
Image: Supatman/Adobe Stock

Generative artificial intelligence is at a crucial moment. Companies want to know how to take advantage of massive amounts of data while keeping their budgets in line with today's economic pressures. Generative AI chatbots are relatively easy to deploy, but they sometimes return false information ("hallucinations") or reveal private data. The best of both worlds may come from more specialized conversational AI trained securely on an organization's own data.

Dell Technologies World 2023 brought that theme to Las Vegas this week. On the first day of the conference, CEO Michael Dell and other leaders delved into what AI can do for businesses beyond ChatGPT.

“Companies will be able to train much simpler AI models on specific, sensitive data more cheaply and securely, which will be a breakthrough in productivity and efficiency,” said Michael Dell.

Dell's new Project Helix is a comprehensive service that helps organizations run generative artificial intelligence on their own infrastructure. Project Helix will first be available as a public product in June 2023.


A custom vocabulary for purpose-built use cases

Companies are racing to deploy generative AI in domain-specific use cases, said Varun Chhabra, senior vice president of product marketing, infrastructure solutions group and telecom at Dell Technologies. Dell’s solution, Project Helix, is a full-stack, on-premises offering where companies train and manage their own proprietary artificial intelligence.

For example, a company could deploy a large language model to read all the knowledge articles on its site and answer users’ questions based on the article summaries, said Forrester analyst Rowan Curran.

The AI "would not attempt to answer the question from knowledge inside the model (the way ChatGPT answers from 'inside' the model)," Curran wrote in an email to TechRepublic.

It wouldn't draw from the entire internet. Instead, the AI would draw only from the proprietary content of the knowledge articles, letting it address the needs of a particular company and its customers more directly.
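Curran's description matches the retrieval-augmented pattern: retrieve the relevant proprietary articles first, then constrain the model to answer only from that content. A minimal, hypothetical sketch follows; the articles, the word-overlap scoring, and the function names are illustrative placeholders, not Project Helix code, and a real deployment would send the prompt to the company's own LLM.

```python
# Hypothetical retrieval-augmented sketch: answer only from retrieved
# proprietary knowledge articles, not from knowledge baked into the model.

def score(question: str, article: str) -> int:
    """Naive relevance score: number of shared lowercase words."""
    return len(set(question.lower().split()) & set(article.lower().split()))

def retrieve(question: str, articles: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k most relevant articles for the question."""
    return sorted(articles, key=lambda a: score(question, a), reverse=True)[:top_k]

def build_prompt(question: str, articles: list[str]) -> str:
    """Restrict the model to retrieved content only (prompt goes to the
    organization's own LLM in a real system)."""
    context = "\n".join(retrieve(question, articles))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

articles = [
    "To reset your password, open Settings and choose Security.",
    "Invoices are emailed on the first business day of each month.",
]
prompt = build_prompt("How do I reset my password?", articles)
print(prompt)  # context contains only the password article
```

With `top_k=1`, only the best-matching article reaches the prompt, so the model never sees (and cannot leak) unrelated internal content; production systems replace the toy scorer with embedding-based search.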

“Dell’s strategy here is really a hardware, software and services strategy that allows businesses to build models more efficiently,” said Brent Ellis, senior analyst at Forrester. “Providing a simplified, validated platform for model creation and training will be a growth market in the future as businesses look to create AI models that focus on the specific problems they need to solve.”


However, there are pitfalls that businesses run into when trying to apply artificial intelligence to their specific business needs.

"It's not surprising that there are a lot of special needs," Chhabra said at the Dell conference. "You have to trust things like the results. This is very different from a general-purpose model that anyone can access. There can be all kinds of answers or questions to watch out for."

Hallucinations and incorrect statements may be common. For use cases involving proprietary information or anonymized customer behavior, privacy and security are paramount.

Enterprise customers can also choose custom, on-premises AI for privacy and security considerations, said Kari Ann Briski, vice president of artificial intelligence software product management at NVIDIA.

Also, compute and inference costs are generally higher in the cloud.

"Once you have that trained model and you've customized and conditioned it to your brand voice and data, running unoptimized inference to save compute cycles is another area that concerns a lot of customers," Briski said.

Different businesses have different needs for generative AI, from those using open source models to those building models from scratch or figuring out how to run a model in production. People are asking, "What's the right mix of training infrastructure and inference infrastructure, and how can it be optimized? How do you run it in production?" Briski said.

Dell describes Project Helix as enabling safe, secure, personalized generative AI regardless of how a customer answers these questions.

"As we move forward in this technology, we see more and more work being done to make models as small and efficient as possible while achieving performance levels similar to larger models, and this is done by fine-tuning and distilling toward specific tasks," Curran said.

SEE: Dell expanded its APEX software-as-a-service family this year.

Changing DevOps – one bot at a time

Where does domain-specific AI like this fit within operations? Anywhere from code generation to unit testing, Ellis said; focused AI models are particularly good at these tasks. Some developers may use AI TuringBots to do everything from design to code deployment.


At NVIDIA, development teams have adopted the term LLMOps, rather than MLOps (machine learning operations), for this work, Briski said.

"You're not coding to it; you're asking it human questions," she said.

Reinforcement learning from human feedback provided by subject matter experts helps the AI understand whether it is responding correctly to prompts. This is part of how NVIDIA uses its NeMo framework, a tool for building and deploying generative AI.

“The way developers are now dealing with this model is going to be completely different in terms of maintenance and upgrade methods,” Briski said.

Behind the scenes with NVIDIA hardware

The hardware behind Project Helix includes NVIDIA H100 Tensor Core GPUs and NVIDIA networking, as well as Dell servers. Briski pointed out that form follows function.

"For every generation of our new hardware architecture, our software has to be ready on day one," she said. "We also think about the most important workloads before we even ship the chip.

"… For example, in the case of the H100, that's the Transformer Engine. Transformers are a very important workload for us and for the world, which is why we put the Transformer Engine in the H100."

Dell and NVIDIA jointly developed the PowerEdge XE9680 and the rest of the PowerEdge server family specifically for complex, emerging artificial intelligence and high-performance computing workloads, and had to make sure they could perform at scale and handle high-bandwidth processing, Chhabra said.

NVIDIA has come a long way since the company trained a vision-based AI on Volta GPUs in 2017, Briski pointed out. NVIDIA currently uses hundreds of nodes and thousands of GPUs to run its data center infrastructure systems.

NVIDIA also uses large language models during hardware design.

"One of the things (NVIDIA CEO) Jensen (Huang) said six or seven years ago, when deep learning emerged, was that every team at NVIDIA had to embrace deep learning," Briski said. "He is doing exactly the same thing with large language models. The semiconductor team uses large language models; our marketing team uses large language models; we have an API for internal access."


This is related to the concept of a security and privacy guardrail. An NVIDIA employee can ask the human resources AI if they can receive HR benefits for, say, adopting a child, but not whether other employees have adopted children.
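The HR example above can be pictured as a pre-check that runs before any question reaches the model: policy questions about one's own benefits pass, while questions about other employees' personal data are refused. This is an illustrative sketch only, not NVIDIA's actual guardrail implementation; the blocked-pattern list is invented for the example.

```python
# Illustrative privacy guardrail (hypothetical, not NVIDIA's code):
# screen questions before they reach the HR assistant model.
import re

# Made-up example patterns for questions about other employees' data.
BLOCKED_PATTERNS = [
    r"\bother employees?\b",        # asking about coworkers' records
    r"\bwho (?:has|have) adopted\b",  # asking who used a specific benefit
]

def guardrail(question: str) -> bool:
    """Return True if the question is safe to pass to the HR model."""
    q = question.lower()
    return not any(re.search(p, q) for p in BLOCKED_PATTERNS)

print(guardrail("Can I receive HR benefits for adopting a child?"))  # True
print(guardrail("Which other employees have adopted children?"))     # False
```

Production guardrail frameworks layer richer checks (topic classifiers, output filters) on the same principle: the model only ever sees questions the policy allows.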

Should your business use custom generative AI?

If your business is considering generative AI, weigh whether you have the need and the capacity to customize or tune AI at scale. You should also consider your security needs. Briski cautions against using public LLMs, which are black boxes when it comes to figuring out where they get their data.

In particular, it is important to be able to prove that the data set used to train the foundation model can be used commercially.

According to Ellis, in addition to Dell's Project Helix, Microsoft's Copilot projects and IBM's watsonx tools show the wide range of options available when it comes to purpose-built AI models. Hugging Face, Google, Meta AI, and Databricks offer open source LLMs, while Amazon, Anthropic, Cohere, and OpenAI offer AI services. Facebook and OpenAI will likely one day offer their own on-premises capabilities, and many other vendors will be lining up to try to join this bustling field.

“General models are exposed to larger data sets and are able to make connections that more limited data sets cannot access in targeted models,” Ellis said. “However, as we see in the market, generic models can make wrong predictions and ‘hallucinate’.

“Targeted models help limit this hallucination, but even more important is the tuning that happens after the model is created.”

Overall, whether an organization should use a general-purpose model or train its own depends on what it wants the AI model to do.

Disclaimer: Dell paid for my airfare, lodging, and meals to Dell Technologies World May 22-25 in Las Vegas.