Science/Technology

7477


Date: May 14, 2024 at 00:18:00
From: ryan, [DNS_Address]
Subject: this is what "we" are using our brains for...omg...

URL: https://thehill.com/policy/technology/4661621-what-to-know-launch-of-gpt-4o-open-ai/


What to know about the launch of GPT-4o
by Lauren Sforza - 05/13/24 7:23 PM ET


OpenAI on Monday launched its latest artificial intelligence (AI) model, GPT-4o, which promises improvements in its text, vision and audio capabilities.

OpenAI unveiled the model during a live demonstration Monday, with Chief Technology Officer Mira Murati saying it is a “huge step forward with the ease of use” of the system. OpenAI’s newest model launched just one day before Google’s annual developer conference scheduled for Tuesday.

Here’s what to know about the launch of GPT-4o.
Improved visual instruction

Users can now show GPT-4o multiple photos and chat with the model about the uploaded images, according to OpenAI.

This can help students work their way through math problems step by step. One of the demonstrations shown during Monday's launch walks users through a simple math problem without giving away any answers.

A separate video posted by online instruction company Khan Academy demonstrates how the new model can help teach students in real time. The student shared his screen while working through the problem as the model guided him through it.
A faster model with improved capabilities

Murati said Monday that GPT-4o provides “GPT-4 level intelligence” that is faster and improves the system’s capabilities across text, vision and audio.

“This is really shifting the paradigm into the future of collaboration, where this interaction becomes much more natural and far, far easier,” she said.

OpenAI said its new model can “respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds.” It noted that this is about the same amount of time it takes for humans to respond in a conversation.

The new model launched Monday

GPT-4o is available starting Monday to all users of OpenAI’s ChatGPT AI chatbot, including those who are using the free version.

“GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits,” OpenAI wrote in its update Monday.

The new voice mode will come out in the following weeks for ChatGPT Plus users, OpenAI CEO Sam Altman wrote on the social platform X.

The model is ‘natively multimodal’

Altman also posted on X that the model is “natively multimodal,” which means that the model can generate content and understand commands through voice, text or images.
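"Natively multimodal" here means a single model accepts mixed inputs in one request. Purely as an illustrative sketch, assuming the general shape of OpenAI's chat-completions request format (the image URL is a placeholder and no request is actually sent), a combined text-and-image message might be assembled like this:

```python
import json

# Hypothetical multimodal request in the style of OpenAI's
# chat-completions API: one user message mixing a text part and an
# image reference. The URL below is a placeholder, not a real image.
request = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this photo?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}

# Serialize the request body as it would be posted to the API.
payload = json.dumps(request)
print(payload[:60])
```

The point of the structure is that text and image parts sit side by side in one message, rather than being sent to separate text and vision models.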

In a separate blog post, he said the new voice and video mode “is the best computer interface” he has ever used.

“It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change,” he wrote in Monday’s post.


Responses:
[7484] [7479] [7481] [7482] [7478]


7484


Date: May 15, 2024 at 08:29:44
From: EQF, [DNS_Address]
Subject: Three AI Programs - May 15, 2024


THREE AI PROGRAMS - Posted by EQF on May 15, 2024

PROGRAM 1

My own already existing earthquake forecasting computer program (partly also Roger's) might be regarded as a very early type of AI program. It learns as more and more earthquake information is gradually fed into the program.

It presently contains data for about 120,000 earthquakes of magnitude 5 and higher going back to the start of 1973.
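EQF does not describe how the program works internally. Purely as an illustration of a program that "learns" as catalog entries are gradually fed in, here is a minimal sketch assuming nothing more than per-region tallies of magnitude-5+ events (the class, regions, and statistic are all hypothetical):

```python
from collections import defaultdict

class QuakeCatalog:
    """Toy incremental store: tallies magnitude-5+ events per region
    as catalog records are fed in one at a time."""

    def __init__(self, min_magnitude=5.0):
        self.min_magnitude = min_magnitude
        self.counts = defaultdict(int)
        self.total = 0

    def feed(self, region, magnitude):
        # Ignore events below the catalog's magnitude floor.
        if magnitude >= self.min_magnitude:
            self.counts[region] += 1
            self.total += 1

    def share(self, region):
        # Fraction of recorded events in this region -- a crude
        # stand-in for a statistic that improves as data accumulates.
        return self.counts[region] / self.total if self.total else 0.0

catalog = QuakeCatalog()
for region, mag in [("Japan", 6.1), ("Chile", 5.4),
                    ("Japan", 4.2), ("Japan", 7.0)]:
    catalog.feed(region, mag)
print(catalog.share("Japan"))  # 2 of the 3 qualifying events
```

Each call to `feed` refines the stored statistics, which is the loose sense in which such a program "learns" from a growing catalog.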


PROGRAM 2

This is a program that I have been planning to get developed for probably several decades. It has to do with a highly advanced educational system.

I simply have not had time to contact the people who would need to write the code.

It could be extraordinarily important and would likely be used by most of the people on the planet.


PROGRAM 3

This is another potentially vitally important AI program that I would like to try to get developed. It could be thought of as a "Smart Watch" that is two generations more advanced than those presently available.

It would also likely eventually be used by most of the people on the planet.

These are personal opinions.

Regards to all,

EQF


Responses:
None


7479


Date: May 14, 2024 at 11:23:09
From: georg, [DNS_Address]
Subject: Re: this is what "we" are using our brains for...omg...


the world has gone bonkers over what is actually not
very intelligent programming ... humans write code ...
let's not forget that important message here ...
computers are stupid machines ... there is NO artificial
intelligence ... it does not exist now and will never
exist ... and that is religious doctrine ... wake up
people and smell the dung on your running shoes


Responses:
[7481] [7482]


7481


Date: May 14, 2024 at 16:36:16
From: Eve, [DNS_Address]
Subject: Re: this is what "we" are using our brains for...omg...



This world is about to drown in its own arrogant BS.


Responses:
[7482]


7482


Date: May 14, 2024 at 16:39:09
From: Eve, [DNS_Address]
Subject: Re: this is what "we" are using our brains for...omg...


...For clarification, I was referring to AI as the dead
works of humans.


Responses:
None


7478


Date: May 14, 2024 at 07:38:01
From: shadow, [DNS_Address]
Subject: Re: this is what "we" are using our brains for...omg...


So dismayed by how many people seem to be pissing
themselves over this technology, sooo excited they are...

"But there are such safe, wonderful applications for
this!"...

*shudder*


Responses:
None



Generated by: TalkRec 1.17
    Last Updated: 30-Aug-2013 14:32:46, 80837 Bytes
    Author: Brian Steele