New AI User Defense Pledges Arise on the Back of Copyright Infringement Claims

After companies like Microsoft, Google, Adobe, Tesla, and GitHub invested an estimated $92 billion in AI R&D in 2022 alone (think a number somewhere between the GDP of Bulgaria and Guatemala, to put this in perspective), they are now seemingly taking bold stances AGAINST their own innovations.

The latest tech giant to speak up is Google, which says it will assist in the defense of anyone accused of copyright infringement as a direct result of using its AI tools.

In doing so, it joins Microsoft and Adobe at the forefront of working to protect users from liability arising from the companies’ various AI services, such as Microsoft’s Copilot and Adobe’s Firefly.

But is this really a benevolent and wise attempt to create guardrails for users, or a cynical power play to appear proactive while encouraging more use of the companies’ respective AI technologies among the general public?

Is there a better, more logical, or more cost-effective way to arrive at the same goal?

Let’s take a closer look at what Google has said and how it stacks up against what other companies are saying, and doing, about the unprecedented growth and potential dangers inherent in the use of AI.

“Partners”…or patsies?

Google’s stance on the regulated use of AI seems to be very strong: namely, that there should, um, well, maybe BE some. As of this writing, there are vanishingly few legal guidelines for how AI can and cannot be used. That is deeply problematic, because a common tenet of law, both in the US and elsewhere, is that if something isn’t explicitly prohibited, it is tacitly permitted. A basic example: if you live in a jurisdiction that permits U-turns anywhere there isn’t a sign specifically prohibiting them, the absence of a sign makes the U-turn acceptable, and a police officer who stops you for making one where no sign was posted is probably overstepping both their authority and the laws of the jurisdiction in which they operate.

(Not that you shouldn’t pull over if a cop lights you up! Comply at the time and complain in court–and, as any decent attorney would tell you, otherwise SHUT YOUR FREAKING MOUTH!)

Thus, people are using AI for all manner of things, from attempts to create “art”ificial paintings to writing business emails to launching cyberattacks on critical US infrastructure to crafting school and college papers to generating deepfake music, videos, and even revenge porn featuring everyone from your favorite celebrities to ordinary people. The potential implications are chilling, and way beyond the scope of what we’re discussing here. The point isn’t that AI can be used in these ways. The problem is that these companies are taking a user-friendly stance that seems at odds with the truckloads of money they’re shoveling into AI’s bottomless maw. But why?

Google identifies those who use its services as “partners.” That is to say, Google views the end user as just as essential and inextricable a part of Google’s ongoing success as the CEO or the company’s top engineering teams. However, part of being a partner to any Big Tech company is, by definition, using its products as defined and permitted by the company’s terms and conditions. Yet an estimated 91% of all people, and an astounding 97% of people aged 18-34, cop to accepting terms and conditions without actually reading them, regardless of the source, product, or service in question. In doing so, they give up key rights to recourse they would otherwise have, or see those rights sharply curtailed later. If you’re reading this and wincing, remembering the last time you were urged to read the Terms of Service or T&Cs before proceeding on a website and clicked the button saying you had without bothering to even open them (because who has that kind of time, given that PayPal’s TOS clocks in at more words, and takes longer to read, than Hamlet?), then you’re probably in that demographic.

He didn’t read the TOS either, and look how that turned out for HIM…

The problem is, these companies KNOW users don’t read the T&Cs or TOSs. They know there’s at BEST a 1-in-10 chance that someone will actually read them carefully all the way through, which is why some eyebrow-raising doozies of conditions have been salted into TOSs ever since they became an inescapable part of our everyday digital lives. Seriously, who’s going to use anything you can realistically find on iTunes to make a biological weapon? But because of this, people are largely glossing over important aspects of Google’s announcement, such as the end user’s responsibility to check their facts independently and to make sure they aren’t intentionally infringing anyone’s copyright, say, by passing off a short story as their own after making a few AI-assisted tweaks to the copy. Only once those obligations are met does Google’s responsibility to help kick in.

Copyright + The Machine

One undeniable problem with AI is that the training data sets used in ML–that’s “machine learning” for those of us who don’t speak technogeek–are often sourced from murky or even allegedly illegal sources, as well as from readily available information on the Internet that you or I might look up every day. This matters because much of that information is covered by copyright, leaving both the AI programs themselves and their end users vulnerable to allegations of copyright infringement.

It’s tempting to argue that at least these companies are doing SOMETHING to protect end users from unintentional infringement, and if it were simply a matter of saying, “If you use our AI and you’re accused of copyright infringement as a result, we’ve got your back in court,” I’d say okay, that’s one thing. But it’s not like that. The companies are offering liability protection that comes with its own set of terms and conditions, which users are unlikely to bother to read or try to understand. What’s even more ironic, if the current rash of AI-related lawsuits by writers is any evidence, is this: the companies are offering to shield users from copyright infringement claims that stem from the use of copyrighted material which the companies themselves may have infringed in order to create these selfsame tools, which in turn created the very situation that demands users be safeguarded from liability in the name of corporate ethics and responsibility, all so that the companies can keep creating and adapting new AI largely unchecked.

Makes your head hurt contemplating how this Ouroboros works, doesn’t it?

And all the while, Google is both one of the biggest investors in AI technology and one of the loudest voices proclaiming that AI needs to be strictly regulated and tightly controlled, lest it outstrip our ability to do either by achieving true, self-aware, sentient status. But no matter how many controls are built in, how many big red buttons are hardwired to shut it down, or how many laws are passed to try to keep AI in check, at its heart AI will always be subservient to its datasets and programming. That means those assembling the datasets on which AI is trained should be sourcing that data ethically right from the beginning.

Because of these factors, I’m deeply pessimistic about these companies’ true commitment to copyright protection versus their interest in shielding themselves from public scrutiny by basically yelling, “C’mon in, the water’s fine! See? We even got floaties and life preservers for everyone!” In reality, if these companies had to defend their users against even just the copyright infringement claims that satisfied the terms and conditions they’ve attached to such a defense, it would likely bankrupt them and bring AI development as we know it to a screeching halt. And so far, as far as I know at this time, none of these companies has had to put its money where its mouth is.

Until they do, and I see how this plays out in real life, I’m going to remain skeptical–and I strongly suggest you do too.

Or, at the very least, read the Terms of Use so you know what you’re getting yourself into.

ABOUT JOHN RIZVI, ESQ.

John Rizvi is a Registered and Board Certified Patent Attorney, Adjunct Professor of Intellectual Property Law, best-selling author, and featured speaker on topics of interest to inventors and entrepreneurs (including TEDx).

His books include “Escaping the Gray” and “Think and Grow Rich for Inventors” and have won critical acclaim, including an endorsement from Kevin Harrington, one of the original sharks on the hit TV show Shark Tank, who is responsible for the successful launch of over 500 products resulting in more than $5 billion in sales worldwide. You can learn more about Professor Rizvi and his patent law practice at www.ThePatentProfessor.com.

Follow John Rizvi on Social Media

YouTube: https://www.youtube.com/c/thepatentprofessor
Facebook: https://business.facebook.com/patentprofessor/
Twitter: https://twitter.com/ThePatentProf
Instagram: https://www.instagram.com/thepatentprofessor/

Tell us about your invention

Call Us Now (1-877-Patent-Professor)