{"id":26967,"date":"2024-02-14T23:08:31","date_gmt":"2024-02-14T17:38:31","guid":{"rendered":"https:\/\/farratanews.online\/apple-is-developing-an-ai-tool-for-animating-images-using-text-prompts\/"},"modified":"2024-02-14T23:08:31","modified_gmt":"2024-02-14T17:38:31","slug":"apple-is-developing-an-ai-tool-for-animating-images-using-text-prompts","status":"publish","type":"post","link":"https:\/\/farratanews.online\/apple-is-developing-an-ai-tool-for-animating-images-using-text-prompts\/","title":{"rendered":"Apple is developing an AI tool for animating images using text prompts"},"content":{"rendered":"


\n

Researchers at Apple have unveiled Keyframer, a prototype generative AI animation tool that enables users to add motion to 2D images by describing how they should be animated. <\/p>\n<\/div>\n

\n

In a research paper published on February 8th, Apple said that large language models (LLMs) are \u201cunderexplored\u201d in animation despite the potential they\u2019ve shown across other creative mediums like writing and image generation. The LLM-powered Keyframer tool is being pitched as one example of how the technology could be applied.<\/p>\n<\/div>\n

\n

Utilizing OpenAI\u2019s GPT-4 as its base model, Keyframer can take Scalable Vector Graphics (SVG) files \u2014 an illustration format that can be resized without losing quality \u2014 and generate CSS code that animates the image based on a text prompt. You just upload the image, type something like \u201cmake the stars twinkle\u201d into the prompt box, and hit generate. Examples provided in the research paper show a Saturn illustration transitioning between background colors, or stars fading in and out of the foreground.<\/p>\n<\/div>\n
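For illustration only (this snippet is not taken from Apple's paper), a prompt like "make the stars twinkle" could plausibly map to a short CSS keyframes animation applied to the stars in the uploaded SVG. The `#stars` selector is an assumed group id for this sketch, not one Apple specifies:

```css
/* Hypothetical sketch of the kind of CSS Keyframer generates.
   Assumes the SVG groups its star shapes under id="stars". */
@keyframes twinkle {
  0%, 100% { opacity: 1; }   /* fully visible at the start and end */
  50%      { opacity: 0.2; } /* faded at the midpoint, creating a twinkle */
}

#stars {
  animation: twinkle 2s ease-in-out infinite;
}
```

Because the output is plain CSS, properties like the `2s` duration or the easing curve can be tweaked by hand, which is consistent with the paper's point that the generated code remains fully editable.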

\n
\n

There\u2019s no video available, but these frame-by-frame comparisons are an example of Keyframer\u2019s capabilities.<\/em><\/figcaption>Image: Apple<\/cite><\/p>\n<\/div>\n<\/div>\n
\n

Users can produce multiple animation designs in a single batch, and adjust properties like color codes and animation durations in a separate window. No coding experience is necessary, as Keyframer automatically converts these changes into CSS, though the code itself is also fully editable. This description-based approach is much simpler than other forms of AI-generated animation, which typically require several different applications and some coding experience.<\/p>\n<\/div>\n

\n
\n

Keyframer\u2019s editing tools are fairly limited, but at least you don\u2019t need to understand code to use it.<\/em><\/figcaption>Image: Apple<\/cite><\/p>\n<\/div>\n<\/div>\n
\n

One professional motion designer who took part in Apple\u2019s research said, \u201cPart of me is kind of worried about these tools replacing jobs, because the potential is so high. But I think learning about them and using them as an animator \u2014 it\u2019s just another tool in our toolbox. It\u2019s only going to improve our skills. It\u2019s really exciting stuff.\u201d<\/p>\n<\/div>\n

\n

Still, it has a long way to go. Keyframer isn\u2019t publicly available yet, and the user study within Apple\u2019s research paper comprised just 13 people, who could only use two simple, pre-selected SVG images when experimenting with the tool. <\/p>\n<\/div>\n

\n

Apple was also careful to mention its limitations within the paper, specifying that Keyframer focuses on web-based animations like loading sequences, data visualization, and animated transitions. By contrast, the kind of animation you see in movies and video games is far too complex to produce using descriptions alone \u2014 for now, at least.<\/p>\n<\/div>\n

\n

Keyframer is one of several generative AI innovations that Apple has announced in recent months. In December, the company introduced Human Gaussian Splats (HUGS), which can create animation-ready human avatars from video clips. Last week, Apple also released MGIE, a new AI model that can edit images using text-based descriptions.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"

Researchers at Apple have unveiled Keyframer, a prototype generative AI animation tool that enables users to add motion to 2D images by describing how they should be animated. In a research paper published on February 8th, Apple said that large language models (LLMs) are \u201cunderexplored\u201d in animation despite the potential they\u2019ve shown across other …<\/p>\n","protected":false},"author":1,"featured_media":26968,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"_links":{"self":[{"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/posts\/26967"}],"collection":[{"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/comments?post=26967"}],"version-history":[{"count":0,"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/posts\/26967\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/media\/26968"}],"wp:attachment":[{"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/media?parent=26967"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/categories?post=26967"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/farratanews.online\/wp-json\/wp\/v2\/tags?post=26967"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}