OpenAI 💩's on BigTech, YouTube "Innovates," and Meta Regulates

Unraveling the latest developments in AI from YouTube, Meta, and OpenAI.

Welcome back, humans.

It’s that time of the week again! Last week I published on Friday, but FYI, I’ll be publishing weekly on Thursday mornings moving forward. Ok, ok, ok… let's dive into some of the most exciting recent developments in AI, sprinkled with my thoughts…

Here’s what you need to know about AI today:

  • YouTube is releasing new AI-powered features.

  • Meta's actually banning political campaigners from using AI.

  • OpenAI sh*ts on other LLMs and BigTech in its first big developer showcase.

#1 YouTube's spread of misinformation through AI needs more attention?

Instead, they've decided to experiment with generative AI tools for content creators and subscribers.

The new package for paid subscribers includes a conversational tool and a comment summarizer.

This tool answers questions about content and makes recommendations, while the summarizer outlines a comment section's key topics.

Kinda cool.

The conversational tool will soon grace youtube.com/new, YouTube's hub for fresh experiments, followed by the summarizer.

If you're a content creator, this might be your thing.

With the comment summarizer, you can quickly understand what your subscribers are discussing and get inspiration for new videos.

Even better, you can remove any comment topics you don't want.

The summarizer only uses published comments and skips those under review, containing blocked words, or from blocked users.
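Just to make that rule concrete, here's a toy sketch of the filtering logic in Python. Everything here (the class, the field names) is hypothetical and for illustration only; it's obviously not YouTube's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    published: bool          # visible on the video
    under_review: bool       # held for moderation
    has_blocked_words: bool  # contains words the creator blocked
    author_blocked: bool     # written by a user the creator blocked

def summarizable(comments: list[Comment]) -> list[Comment]:
    """Keep only the comments the summarizer would look at, per YouTube's description."""
    return [
        c for c in comments
        if c.published
        and not c.under_review
        and not c.has_blocked_words
        and not c.author_blocked
    ]

# Only the first comment survives the filter.
print(summarizable([
    Comment("Loved the pacing of this video!", True, False, False, False),
    Comment("buy my crypto course", True, False, True, False),
]))
```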

Only available in English for now, these features are in the test phase.

Google's AI experiments extend from Search to Workspace, consumer apps, and more, including creative AI elements for ads and music.

However, they're starting small with their new YouTube features, slowly expanding based on feedback.

Not every tool proves to be a hit right off the bat. After all, it's all about trial and error.

💡My take:

🥱 Snooze. This is a bit boring considering the size of YouTube, the resources it has, and, most importantly, the fact that it was a huge source of misinformation during the last election. Call me crazy, but I just think that launching creator tools is an easy way for the company to be involved in AI without tackling the hard problems which literally impact our society. To take it a step further, it’s not just about misinformation during political cycles. It’s about the infusion of misinformation into our everyday lives.

I’ve heard from dozens of people how the information on YouTube has brainwashed an elderly parent, someone who doesn’t realize that what they are watching isn’t in fact news; it's someone’s personal views presented as news and fact. My take is simple: spend more time experimenting with AI to fight misinformation on YouTube, not on superficial creator tools.

#2 AI and politics? A dicey duo, according to Meta.


Speaking of misinformation, Meta just recently stopped political campaigners from using its generative AI ad tools.

Their objective is to stop the spread of election misinformation. 🛑

The ban also applies to other "regulated industries."

Meta's AI ad tools can create backgrounds, adjust images, and tweak ad copy in response to text prompts.

They're handy, but potentially misused.

Enter the "prohibited list": housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, and financial services.

The move allows Meta to evaluate the risks and devise appropriate safeguards.

It's not just Meta doing the hard yards.

Google, the biggest digital ad company out there, is developing similar AI tools.

They're keeping politics well out of it by blocking political keywords as prompts.

They'll also require election-related ads to disclose if they have any "synthetic content."

Even Snapchat is on it, fact-checking all political ads and blocking them in their AI chatbot.

On the other side, TikTok bars all political ads.

AI is here, it's powerful, but the internet giants are trying to keep it from meddling in politics.

Still, it's food for thought.

AI's potential for good is huge, but so is its potential for misuse. Especially when it comes to politics.

💡My take:

+1 for Meta on this. I love the ability to disclose if “synthetic content” has been used, but I don’t fully trust that bad actors will opt in and share this information. After all, their whole goal is to mislead! I’d love to see someone take this a step further.

Related but unrelated: I think there is a gigantic opportunity for founders to build an AI tool for detecting misinformation, synthetic content, and synthetic media. The use case is obvious for big organizations, but I also think there is a consumer component here as well. Generally, we all want to know if what we are reading/watching/listening to is legit. A tool like this could easily shift the power into the public’s hands.

#3 OpenAI sh*ts on BigTech’s AI efforts.


If you’re catching up on AI news and haven’t heard about OpenAI’s developer conference, here’s a recap:

It was a grand affair, drawing in over 900 developers and software enthusiasts.

The primary focus?

Their future vision for artificial intelligence.

ChatGPT has already made a significant name for itself in less than a year.

With over 100 million active users weekly and a developer community reaching 2 million, it's certainly a force to be reckoned with.

During the event, the company introduced GPT-4 Turbo, a new, improved version of their AI model.

This one has knowledge of events as recent as April 2023, a massive upgrade from previous versions that couldn't answer anything past 2021.

Not just that, they also showcased a vision-capable model named GPT-4V.

What does the "V" stand for?

Vision.

This AI model can analyze images and describe them, a functionality that can be very useful for the visually impaired.
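For the developers reading: here's a minimal sketch of what calling a vision-capable model looks like with OpenAI's Python SDK. The model name follows what OpenAI announced at the conference, but treat the details (and the placeholder image URL) as assumptions, not gospel:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Ask the vision model to describe an image, e.g. for a visually impaired user.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable model announced at DevDay
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image for someone who can't see it."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
    max_tokens=300,
)

print(response.choices[0].message.content)
```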

But the rollouts don't stop there.

OpenAI introduced a new product line called GPTs, allowing users to create task-specific versions of their chatbot.

However, the AI giant isn't without its drawbacks.

Alyssa Hwang, a computer science researcher, pointed out a flaw where the chatbot mistook steak for chicken noodle soup due to misleading captions.

This, she warns, could open the door to adversarial attacks.

But OpenAI seems to be ahead of the game, having given researchers early access to discover these flaws before the official release.

It's all part of their "gradual iterative deployment" approach that helps address safety risks.

Despite these remarkable advancements, OpenAI is not without competition.

Microsoft's Bing (built using OpenAI's own technology), Bard from Google, and Claude from Anthropic are proving to be tough competitors.

Oh, and let’s not forget Grok, lol. The new player on the block, released by Elon Musk’s xAI, promises to answer "spicy questions."

Upon being asked about the release of Grok, OpenAI CEO Sam Altman casually said, "Elon's gonna Elon."

Overall, the future of AI is poised for tremendous growth and potential.

But as new players enter the scene, the question stands: can Google ever catch up to OpenAI?

đź’ˇMy take:

Regardless of Google’s size, they are going to be hard-pressed to catch up to OpenAI. I mean… they’ve been trying, but this is a classic case of a big company doing big company tings and falling asleep at the wheel. It’s not just Google; it’s really all the large tech players, with the exception of Microsoft and its foresight in even investing in OpenAI to begin with. Tech aside, what I love most about watching this all play out is Altman’s highly strategic way of building and capitalizing on OpenAI’s momentum with ChatGPT. It’s giving Steve Jobs in all the best ways, including an expert rollout of their developer conference. Optics are often overlooked, but they aren’t here.

đź’©Sh*ts & Giggles

That’s all for today folks!

See you on the interwebs,

AB