{"id":33841,"date":"2024-03-28T20:01:20","date_gmt":"2024-03-28T14:31:20","guid":{"rendered":"https:\/\/farratanews.online\/new-ai-test-measures-how-fast-robots-can-respond-to-user-commands\/"},"modified":"2024-03-28T20:01:20","modified_gmt":"2024-03-28T14:31:20","slug":"new-ai-test-measures-how-fast-robots-can-respond-to-user-commands","status":"publish","type":"post","link":"https:\/\/farratanews.online\/new-ai-test-measures-how-fast-robots-can-respond-to-user-commands\/","title":{"rendered":"New AI test measures how fast robots can respond to user commands"},"content":{"rendered":"
<ul>\n
<li><strong>An artificial intelligence group has released a new set of results assessing the speed of hardware in running AI applications.<\/strong><\/li>\n
<li><strong>Two new benchmarks measure the speed of AI chips and systems in generating responses from data-packed AI models.<\/strong><\/li>\n
<li><strong>One new benchmark also evaluates the speed of question-and-answer scenarios for large language models.<\/strong><\/li>\n<\/ul>\n
Artificial intelligence benchmarking group MLCommons on Wednesday released a fresh set of tests and results that rate the speed at which top-of-the-line hardware can run AI applications and respond to users.<\/p>\n
The two new benchmarks added by MLCommons measure the speed at which AI chips and systems can generate responses from powerful, data-packed AI models. The results roughly show how quickly an AI application such as ChatGPT can deliver a response to a user query.<\/p>\n
One of the new benchmarks measures the speed of a question-and-answer scenario for large language models. It is based on Llama 2, a model with 70 billion parameters developed by Meta Platforms.<\/p>\n
WHITE HOUSE UNVEILS NEW AI REGULATIONS FOR FEDERAL AGENCIES<\/strong><\/p>\n
MLCommons officials also added a second text-to-image generator, based on Stability AI&#8217;s Stable Diffusion XL model, to its suite of benchmarking tools, called MLPerf.<\/p>\n