Is AI generative or degenerative?

George Gilder posed this simple question to kick off the COSM Technology Summit during three blustery days this month in Bellevue, Washington.

The conference gathered the best minds in investing and science, from Cathie Wood to Ray Kurzweil, just a few miles from the HQs of Microsoft and Amazon.

Many spoke favorably of the burgeoning technology, as the ChatGPT-induced fervor has pushed Microsoft’s stock to all-time highs.

Yet the optimism was tempered with caution and realism.

As George wrote in his latest newsletter, AI has a serious problem.

During the conference, Bob Marks of the Discovery Institute used the “Mona Lisa” to point out an obvious flaw in generative AI that makes it spew out garbage over time.

Like a game of telephone, an AI model trained on the original painting will reproduce a copy with stunning accuracy, off from the original by only a fraction of a percent.

Use that new image to train another AI model, and you get an image that changes a little more. Repeat the process enough times and you get something wholly unlike the original.
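
Here’s a minimal sketch of that telephone game in Python, using a toy generative model (a simple Gaussian standing in for an image model); the setup and numbers are our own illustration, not Marks’ actual demonstration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# The "original painting": 10,000 samples from a known distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Each generation, "train" a new model on the previous generation's output
# (here, just refit the mean and standard deviation), then generate fresh
# samples from that model: a copy of a copy of a copy.
for generation in range(1, 1001):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=10_000)
    if generation in (1, 10, 100, 1000):
        print(f"generation {generation:>4}: mean={mu:+.3f}, std={sigma:.3f}")

# The fitted statistics perform a random walk away from the original values:
# each generation loses a little fidelity, and the errors compound rather
# than cancel.
```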

We used MidJourney to simulate this concept on the “Mona Lisa”.

After 1,000 iterations, we got the following result:

Source: MidJourney.com

The resemblance is there. But we all know it looks off — a bit too clean, too realistic.

After another 1,000 iterations, the image became the following:

Source: MidJourney.com

A lovely picture of a woman, no doubt. However, the liberties the model takes to make the painting look realistic destroy the very qualities that made the original unique.

AI models trained on AI-generated data eventually run amok, producing everything from incorrect answers to completely irrelevant ones.

This problem has already begun to unravel our infant AI systems.

Many AI enthusiasts hated that ChatGPT was trained only on data from before 2021. But the later updates from OpenAI illustrate the problem with unconstrained AI usage.

Accuracy for AI models plunged as the system incorporated new articles and images into its training data, a large portion of which was generated by AI. Outputs became nonsensical at times.

Keep in mind, only a fraction of the latest training data was AI-generated. As Scientific American pointed out, it takes only a small amount of such data to corrupt the output.
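
The toy model above makes the same point: even a small, steady trickle of synthetic data compounds, because once model output enters the training pool, it never leaves. The 1% replacement rate below is our own illustrative assumption, not a figure from Scientific American:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# The "web": a pool of 10,000 data points, all human-made to start.
pool = rng.normal(loc=0.0, scale=1.0, size=10_000)
human = np.ones(len(pool), dtype=bool)

for generation in range(1, 201):
    mu, sigma = pool.mean(), pool.std()          # "train" on the current pool
    idx = rng.choice(len(pool), size=100, replace=False)
    pool[idx] = rng.normal(mu, sigma, size=100)  # 1% replaced by model output
    human[idx] = False
    if generation in (1, 50, 200):
        print(f"generation {generation:>3}: human share={human.mean():.0%}, "
              f"mean={mu:+.3f}, std={sigma:.3f}")

# Human-made data decays geometrically, to roughly 13% of the pool after 200
# cycles; once synthetic output dominates, the feedback loop above takes over.
```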

This is why many AI inventors expect an AI trained on Elon Musk’s Twitter (X) to fail miserably.

While free speech might be laudable for ethical and moral reasons, it’s terrible for training AI models. It’s not about getting more data; it’s about getting higher-quality data.

Training an AI model on Twitter feeds is like writing a paper using a stream of consciousness. Somewhere buried in there is an answer. But you’ll have to sift through a ton of useless and often counterproductive information to get there.

The scope of this problem prevents us from using models to train other models.

However, it doesn’t preclude us from exploiting AI for specific uses.

Consider the latest announcement from OpenAI.

The company plans to increase the amount of data you can feed the platform in a single prompt from roughly 3 pages to 300 pages.

A writer trying to put together a presentation or a blog post can’t feed the system enough data with just 3 pages.

But with 300 pages, they can now provide not only the data needed to generate the content, but multiple examples the AI model can use to match tone, style, and cadence.

It’s the equivalent of trying to understand Charles Dickens’ writing style from the preface versus reading all his books.
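
To make that concrete, here’s a rough sketch of the 300-page workflow using OpenAI’s Python SDK; the model name and file names are placeholders we chose for illustration, not details from the announcement:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the source data plus several writing samples the model can use to
# match tone, style, and cadence. All file names here are hypothetical.
source_data = Path("quarterly_numbers.txt").read_text()
style_samples = "\n\n---\n\n".join(
    Path(p).read_text() for p in ["post_1.md", "post_2.md", "post_3.md"]
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder for any long-context model
    messages=[
        {"role": "system",
         "content": "Write a blog post in the same tone, style, and cadence "
                    "as the samples provided."},
        {"role": "user",
         "content": f"Samples:\n{style_samples}\n\nData:\n{source_data}\n\n"
                    "Draft a post summarizing the data."},
    ],
)
print(response.choices[0].message.content)
```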

Beyond bigger prompts, there are other obvious, limited developments that would immensely improve our current AI systems.

For example, a financial AI given direct access to the SEC website is far more likely to deliver the right answer than one that has to search the internet for the same data.

Just enabling users to point a model at specific datasets would make today’s systems far more powerful and useful.
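
As a sketch of what that could look like, here’s how a tool might pull a company’s recent filings straight from SEC EDGAR’s public JSON API rather than scraping search results; the endpoint and field names follow EDGAR’s published API, but treat them as assumptions to verify:

```python
import requests

CIK = "0000320193"  # Apple Inc., zero-padded to 10 digits
resp = requests.get(
    f"https://data.sec.gov/submissions/CIK{CIK}.json",
    headers={"User-Agent": "example@example.com"},  # SEC asks for a contact UA
    timeout=10,
)
resp.raise_for_status()
filings = resp.json()["filings"]["recent"]

# Build a grounded context block from the five most recent filings.
context = "\n".join(
    f"{form} filed {date} (accession {acc})"
    for form, date, acc in list(
        zip(filings["form"], filings["filingDate"], filings["accessionNumber"])
    )[:5]
)
print(context)  # feed this, not web search results, to the model as source data
```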

While these positive developments move the needle forward, they do not solve the overarching training data problem.

Now, you might expect that George left the conference rather despondent.

The talks certainly solidified his belief that AI isn’t going to revolutionize the world the way many predict, at least not anytime soon.

However, the conference did confirm his long-held belief that the way to invest in this technology isn’t in the software models themselves but in the hardware that enables their advancement.

It’s like investing in the power grid rather than trying to pick between coal and solar.

What’s unique about George’s investment thesis is that the material he’s eyeballing advances industries beyond artificial intelligence.

Its applications range from healthcare to the military in ways that would truly shape our future.

Everything is detailed in George’s latest report, which you can ACCESS HERE.

Wealth Whisperer Team
