
AI: What is it Good For?

Post Date: 2024-04-09, Analog Devices Inc.

Unless someone has been hiding under the proverbial rock since before the pandemic, everyone has at least heard of artificial intelligence. In the past 18 months, since ChatGPT's launch in late 2022, AI has become a talking point, not only from Main Street to Wall Street, but also from Capitol Hill to the ski slopes at the World Economic Forum's annual meeting in Davos. Even if the nature of these conversations differs, and the level of expertise of the people discussing AI differs, they all have one thing in common: they are all trying to understand AI, its implications, and its applications.

There seems to be an understanding, or perhaps a hope, that if AI is at least mentioned alongside something else, then that something else will immediately get more attention. While this may have been the case in 2023, it no longer is. What doesn't seem to be well understood is that there are different kinds of AI, some of which have been around much longer than ChatGPT.

In addition, these different types of AI have different implications for supporting hardware and software, and for use cases. As understanding of these nuances deepens, the conversation becomes more sophisticated, and it becomes clear that merely mentioning "AI" is no longer enough. The conversation must address the problem being solved, how AI can be used to solve it, and for whom.

Traditional vs. generative AI

Before delving into the maturing nature of the AI ecosystem and the solutions starting to be brought to bear, it is worth taking a small step back and level-setting on the two primary types of AI: traditional AI and generative AI. Given that most people know AI primarily through the hype generated by ChatGPT, their understanding of AI revolves around what is better described as "generative AI." There is a lesser known, but more prevalent, form of AI now often referred to as "traditional AI."

The primary characteristic that distinguishes generative AI from traditional AI is the model's ability to create novel content based on prompted inputs for the former, as opposed to producing a known outcome based on specific inputs for the latter. While both types of AI are predictive in nature, generative AI creates new patterns of data or tokens given the most likely occurrence based on the data on which it was trained. Traditional AI, on the other hand, recognizes existing patterns and acts upon them based on pre-determined rules and actions.

Essentially, while the latter is all about pattern recognition, the former is about pattern creation. A simple example was given by Jensen Huang at GTC 2024: traditional AI started to take off with the AlexNet neural network model in 2012, which could process a picture of a cat and then identify that the picture was of a cat. With generative AI, you input the text prompt "cat" and the neural net generates a picture of a cat.
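The recognition-versus-creation distinction can be sketched in a few lines of toy Python (no ML libraries, purely illustrative): a discriminative routine maps an input to one of a fixed set of known labels, while a generative routine produces novel output by sampling likely continuations.

```python
import random

# Traditional (discriminative) AI: recognize an existing pattern and
# map it to a pre-determined label. A toy stand-in for a classifier
# such as AlexNet; the set of possible answers is fixed in advance.
def classify(pixels):
    return "cat" if sum(pixels) > 10 else "not a cat"

# Generative AI: create novel output token by token, choosing a
# likely continuation learned from training data (here, a tiny
# hand-written transition table stands in for a trained model).
def generate(prompt, transitions, length=4):
    word = prompt
    output = [word]
    for _ in range(length):
        word = random.choice(transitions.get(word, [prompt]))
        output.append(word)
    return " ".join(output)

transitions = {"cat": ["sat", "ran"], "sat": ["down"], "ran": ["away"],
               "down": ["quietly"], "away": ["quickly"]}

print(classify([3, 4, 5]))                     # recognizes: "cat"
print(generate("cat", transitions, length=2))  # creates e.g. "cat sat down"
```

The classifier can only ever emit labels it was built with; the generator's output varies from run to run, which is exactly the novelty-versus-recognition split described above.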

Another point of differentiation is the amount of resources required for both training and inference with each type of AI. On the training side, given the size of the models and the amount of data required to adequately train generative AI models, typically a data center's worth of CPUs and GPUs in the tens of thousands are required. In contrast, typical traditional AI training might require a single server's worth of high-end CPUs and maybe a handful of GPUs.

Similarly for inferencing, generative AI might utilize the same data-center scale of processing resources or, at best, when optimized for edge applications, a heterogeneous compute architecture which typically consists of CPUs, GPUs, neural processing units (NPUs), and other accelerators providing multiple tens of TOPS. For edge applications running on-device, where generative AI models are in the range of 7 billion parameters, this is estimated to be at least 30 to 40 TOPS for the NPU alone. On the other hand, traditional AI inferencing can typically be performed with microcontroller-level resources or, at worst, a microcontroller with a small AI accelerator.

Granted, the scale of these resource requirements for the different types of AI all depends on model sizes, the amount of data required to adequately train the models, and how quickly the training or inferencing needs to be performed. For example, some traditional AI models, like those used for genome sequencing, require significant resources and might rival generative AI requirements. However, in general and for the most widely used models, these resource comparisons are valid and applicable.
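The resource gap can be made concrete with back-of-envelope arithmetic. The figures below (a transformer needing roughly 2 operations per parameter per token, and an assumed prompt-processing rate) are illustrative assumptions for the sketch, not vendor specifications:

```python
# Back-of-envelope estimate of compute needed to sustain a token rate.
# Assumption: roughly 2 operations per parameter per token processed.
def required_tops(params_billion, tokens_per_second, ops_per_param=2):
    """Tera-operations per second needed for a given model and rate."""
    return params_billion * 1e9 * ops_per_param * tokens_per_second / 1e12

# A 7-billion-parameter model processing a prompt at an assumed
# 2,500 tokens/s lands in the tens-of-TOPS range cited above:
print(required_tops(7, 2500))      # 35.0 TOPS

# A small traditional-AI model (e.g. 5 million parameters at an
# assumed 100 token-equivalent inferences/s) needs far less:
print(required_tops(0.005, 100))   # 0.001 TOPS (about 1 GOPS)
```

Even with generous assumptions for the traditional model, the two workloads differ by several orders of magnitude, which is why one fits on a microcontroller and the other needs a dedicated NPU.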

What is it good for? The potential for everything.

As the ecosystem of AI solutions continues to mature, it is becoming increasingly clear that merely mentioning AI is no longer enough. A more mature presentation of strategy, positioning, and solutions is needed to establish a credible claim to compete. Potential customers have already seen the generated images of puppies eating ice cream on the beach. That's great. But what they're asking now is, "How does it actually provide value by helping me personally or solving my business challenges?"

The great thing about the AI ecosystem is that it comprises many different companies, all trying to answer these questions. Qualcomm and IBM were two companies worth noting at this year's Mobile World Congress (MWC), considering how they are using both types of AI: Qualcomm applying them to consumers and prosumers, and IBM to enterprises.

Moreover, not only do they have their own solutions, but both also offer development environments that help developers create AI-based applications, which is critical to the developer ecosystem and allows developers to do what they do best. Much like the app stores and software development kits needed at the beginning of the smartphone era, these development environments will allow an ecosystem of developers to innovate and create AI-based apps for use cases not yet thought of.

To help answer the question "What are the benefits of AI?", Qualcomm demonstrated a number of applications at the show that bring artificial intelligence into the real world. In terms of traditional AI, its latest Snapdragon X80 5G Modem-RF platform uses AI to dynamically optimize 5G connectivity. It does this by providing context awareness to the modem's AI, such as the application or workload the user is running and the current RF environment in which the device is operating.

With this awareness, the AI makes real-time decisions on key optimization factors such as transmit power, antenna configuration, and modulation schemes, dynamically optimizing 5G connectivity to deliver the performance the application requires, and the RF environment allows, at the lowest power.
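The kind of decision being made can be illustrated with a toy policy. This is a hand-written rule table with invented thresholds, not Qualcomm's actual modem logic (which uses a trained model rather than fixed rules):

```python
# Toy link-adaptation policy: map observed RF conditions and the
# active workload to link settings. The thresholds are invented
# for illustration; a real modem AI learns these trade-offs.
def pick_link_settings(snr_db, workload):
    # Higher signal-to-noise ratio permits denser modulation.
    if snr_db > 25:
        modulation = "256-QAM"
    elif snr_db > 15:
        modulation = "64-QAM"
    else:
        modulation = "QPSK"
    # Latency-sensitive workloads keep transmit power up;
    # background traffic trades throughput for battery life.
    tx_power = "high" if workload in {"video_call", "gaming"} else "low"
    return {"modulation": modulation, "tx_power": tx_power}

print(pick_link_settings(28, "gaming"))
# {'modulation': '256-QAM', 'tx_power': 'high'}
print(pick_link_settings(10, "email_sync"))
# {'modulation': 'QPSK', 'tx_power': 'low'}
```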

When it comes to generative AI, Qualcomm's solutions highlight how generative AI is enabling new types of AI smartphones and future AI personal computers. Given the volume of user-generated images and videos created on smartphones, many solutions centered around image and video processing, as well as privacy and personalization, can be implemented by running generative AI models on the device. In addition, Qualcomm showed how multimodal generative AI models can facilitate a more natural way of interacting with these models, allowing prompts to include not only text but also voice, audio, and image input.

For example, you can submit images of ingredients and prompt for recipes that include those ingredients. The multimodal model then combines the text or spoken prompt with the ingredients it identifies in the picture to output a recipe using those ingredients.
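In API terms, a multimodal prompt is usually a list of typed parts bundled into one request. The sketch below assembles such a payload; the field names are a generic illustration, not any vendor's actual schema:

```python
import base64
import json

def multimodal_prompt(text, image_bytes):
    """Bundle a text instruction and an image into one request body.
    Generic structure for illustration only; real multimodal APIs
    each define their own schema."""
    return {
        "parts": [
            {"type": "text", "text": text},
            {"type": "image",
             "data": base64.b64encode(image_bytes).decode("ascii")},
        ]
    }

payload = multimodal_prompt(
    "Suggest a recipe using these ingredients.",
    b"fake-image-bytes",  # a real app would pass camera/JPEG data
)
print(json.dumps(payload, indent=2))
```

The model's vision front end decodes the image part into tokens that sit alongside the text tokens, which is what lets a single prompt mix "recipe" with a photo of the fridge.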

The first of these solutions is coming to market through first-party applications developed by smartphone OEMs themselves. This makes sense because OEMs have been able to work with chipset vendors (in this case Qualcomm) to best utilize available resources like NPUs and optimize the performance and power consumption of these generative AI-based applications. These first-party apps will act as appetizers, whetting the appetite of smartphone users and helping them understand what on-device generative AI can do. Ultimately, TIRIAS Research believes this will lead to the next wave of adoption, driven by third-party developers of generative AI-based applications.

This is where Qualcomm's newly announced AI Hub will help. The AI Hub is designed to allow developers to take full advantage of the heterogeneous computing architecture in Qualcomm's Snapdragon chipsets, which consists of a CPU, GPU, and NPU. One of the trickiest aspects of developing third-party applications that leverage generative AI models is how best to map workloads onto the processing resources that optimize performance and power consumption. The AI Hub allows developers to see how applications perform when running on CPUs, GPUs, and NPUs, and optimize from there. In addition, developers can run their apps on real devices, using what Qualcomm calls "device farms," in the cloud. The best part for developers? According to Qualcomm, they can do it all for free.
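The underlying workflow, profiling one workload on several candidate compute targets and keeping the fastest, can be sketched generically with plain-Python stand-ins (this is not the AI Hub's actual API):

```python
import time

def profile_backends(workload, backends):
    """Run the same workload on each candidate backend and return
    the fastest backend's name along with all measured timings."""
    timings = {}
    for name, run in backends.items():
        start = time.perf_counter()
        run(workload)
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, timings

# Stand-ins for CPU vs. NPU execution paths (illustrative only:
# the "npu" path simply does a tenth of the work).
def cpu_path(n):
    sum(i * i for i in range(n))

def npu_path(n):
    sum(i * i for i in range(n // 10))

best, timings = profile_backends(
    200_000, {"cpu": cpu_path, "npu": npu_path})
print(best)  # the backend doing less work should win here
```

A real tool profiles actual operator placement and power draw on physical devices, but the decision loop, measure on each target and pick the winner, is the same shape.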

Where Qualcomm is focusing on end devices used by consumers and prosumers, IBM is highlighting solutions for enterprises looking to leverage AI through its watsonx platform. At MWC, one of the many applications it highlighted was its Watson.

The next step in artificial intelligence

We're just entering the age of artificial intelligence, and it's still early days. 2023 was the year that captured everyone's imagination about AI, while 2024 will be about value creation and continued evolution. This year will show us what AI can do and prompt us to ask: "If it can do this, what else can it do?"

If previous technological breakthroughs are any indication, once the global economy starts asking this question, the door will open to brave new worlds where AI will be used in ways we can't even imagine.
