GPT-4, fighting climate change and reducing prejudice: what’s next for AI in 2023

AI got creative in 2022, generating impressive text, video and images from scratch. It is also our top technology prediction for 2023. But as well as being a source of fascination, it is a source of fear.

Beyond writing essays and creating pictures, AI will affect every industry, from banking to healthcare, but it’s not without biases, which can prove harmful.

Here’s how AI could evolve in 2023, and what to watch out for.

Chatbots and competition

In early 2022, OpenAI launched DALL-E 2, a deep learning model that produces images from typed instructions. Google and Meta then rolled out AI systems that can generate video from text prompts.

Just a few weeks ago, OpenAI launched ChatGPT, a chatbot that catapulted onto the scene with its ability to produce well-researched, eloquent text from a short prompt.

Now, the next thing to follow, which could arrive in 2023, is an update: GPT-4. Like its predecessor, it is expected to translate between languages, summarise and generate text, answer questions, and power a chatbot.

It will also reportedly have 1 trillion parameters, which would mean it could produce more accurate answers even faster.

But Elon Musk, a co-founder of OpenAI, has already criticised ChatGPT for refusing to answer questions about certain topics, such as the environment, because of how it has been programmed.

Another thing to watch out for in 2023 is just how other tech giants will respond to the competition.

Google management issued a “code red” when ChatGPT launched, over concerns about how it could affect Google’s search engine, according to the New York Times.

Artificial intelligence in business and tackling global problems

AI also has the potential to play a role in tackling climate change, as it can help companies make sustainability decisions and cut carbon emissions more easily.


“This technology can help businesses and governments address this challenge and make the world an environmentally better place for us,” said Ana Paula Assis, general manager for EMEA at IBM.

She told Euronews Next that AI allows for faster decision-making, which is especially needed as an ageing population “puts a lot of pressure on the skills and capabilities we can have in the market”.

Assis said this is why applying AI for automation has now become “urgent and imperative”.

But artificial intelligence will not only transform business. It can also help doctors make diagnoses by aggregating data to assess symptoms.

It can also help you with banking and loans.


Credit Mutuel in France has adopted artificial intelligence to help its client advisers provide better and faster responses to customers. Meanwhile, NatWest in the UK is using AI to help its customers make more informed mortgage decisions.

The demand for AI in enterprises has already increased in 2022 and looks set to grow.

IBM research shows that between the first and second quarters of 2022, there was a 259 per cent increase in job postings in the AI domain, Assis said.

AI and ethics

As the technology develops in 2023, so will the deeper questions about the ethics of AI.

While AI can help reduce the impact of human biases, it can also make the problem significantly worse.

Amazon, for example, stopped using a hiring algorithm after it was found to favour applications that used words like “executed” or “captured”, which appeared more often on male résumés.

Meanwhile, ChatGPT will refuse to write a racist blog post, stating that it “is not capable of generating offensive or harmful content”. But it might comply if the request is phrased in a roundabout way that tiptoes around the topic.



Such biased or harmful content is possible because the AI is trained on hundreds of billions of words pulled from websites and social media.

Another way AI can perpetuate bias is through systems that make decisions based on past training data, which can encode biased human decisions or historical and societal inequalities. Gaps in the available data are also a factor: facial recognition systems, for example, have often been trained mainly on images of white men.

The responsibility for fairer, less harmful AI therefore falls not only on the AI companies that create the tools, but also on the companies that use the technology.

IBM research shows that 74% of companies surveyed said they do not yet have all the capabilities needed to ensure that the data used to train AI systems is not biased.

Another problem is the lack of tools and frameworks to provide companies with the ability to explain and be transparent about how algorithms work.

“These are really the embedding capabilities that we need to see companies perform in order to deliver a fairer, safer, and more secure use of AI,” Assis said.
