{"id":9761,"date":"2023-10-30T08:59:27","date_gmt":"2023-10-30T03:29:27","guid":{"rendered":"https:\/\/farratanews.online\/openais-chatgpt-can-look-at-uploaded-files-in-the-latest-beta-updates\/"},"modified":"2023-10-30T08:59:27","modified_gmt":"2023-10-30T03:29:27","slug":"openais-chatgpt-can-look-at-uploaded-files-in-the-latest-beta-updates","status":"publish","type":"post","link":"https:\/\/farratanews.online\/openais-chatgpt-can-look-at-uploaded-files-in-the-latest-beta-updates\/","title":{"rendered":"OpenAI\u2019s ChatGPT can look at uploaded files in the latest beta updates"},"content":{"rendered":"


\n

OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won\u2019t have to select modes like Browse with Bing from the GPT-4 dropdown; the chatbot will instead infer what they want from context. <\/p>\n<\/div>\n

\n

The new features bring a dash of the office tools offered by OpenAI\u2019s ChatGPT Enterprise plan to the standalone individual subscription. I don\u2019t seem to have the multimodal update on my own Plus plan yet, but I was able to test the Advanced Data Analysis feature, which works about as expected. Once a file is fed to ChatGPT, it takes a few moments to digest it before it\u2019s ready to work; after that, the chatbot can do things like summarize data, answer questions, or generate data visualizations based on prompts.<\/p>\n<\/div>\n

\n

The chatbot isn\u2019t limited to just text files. On Threads, a user posted screenshots of a conversation in which they uploaded an image of a capybara and asked ChatGPT to create a Pixar-style image based on it using DALL-E 3. They then iterated on the first image\u2019s concept by uploading another image, this time of a wiggly skateboard, and asking it to insert that into the scene. For some reason, it put a hat on the capybara, too?<\/p>\n<\/div>\n