Artificial intelligence or “AI” traces its roots to Alan Turing’s work in the 1950s. While it’s a current hot topic, AI is far from new. Students and teachers have been using AI in some form for years in everyday life, such as when they use a smartphone’s search feature to find a photo of their cat, listen to Spotify playlists, or use Duolingo to learn a new language. AI is also being incorporated into many Ed Tech products, such as DreamBox Learning for math.
Most of the current focus on AI has been on generative AI, or “gen AI,” the technology behind large language model tools like ChatGPT. Gen AI can autonomously generate new content, such as text, images, audio, video, and software code, by learning patterns from existing data. To do that, a model must ingest huge quantities of data. The model is then further refined and “tuned,” and it may continue to learn from what users provide as “prompts.”
Most schools have focused on the potential plagiarism implications of students using gen AI products. This article addresses two other considerations: intellectual property and privacy.
IP Considerations in AI
The intellectual property most pertinent to K-12 schools includes trademarks and copyrights. Roughly speaking, trademarks, as signifiers of the source of goods and services, protect a brand owner’s identity and goodwill (and protect consumers from confusion), while copyrights protect original, creative expression fixed in a tangible form. Whether it realizes it or not, every K-12 institution owns and uses intellectual property, and those rights can be put at risk by inappropriate use of AI.
The implications for copyright are currently playing out in the federal courts. Several lawsuits are pending against AI platforms for alleged infringement of copyright through the ingestion of copyrighted works during training. For instance, visual artists brought a class action against Stability AI, the operator of the Stable Diffusion platform, alleging copyright infringement based on the platform’s use of five billion copyrighted images as training data. In another case, writers sued OpenAI, alleging copyright infringement through its use of their books as training material. These and other suits alleging infringement via training data remain pending. Suits alleging infringement of copyright by AI output have yet to weave their way through the courts, but they are surely coming. Absent careful AI policies, K-12 institutions could find themselves unwittingly facing a copyright infringement claim over content generated by educators using AI; educational fair use does not exempt K-12 institutions from all copyright claims.
As to trademarks, even very basic AI platforms can generate convincing brand dupes with just a few user prompts. Take, for instance, Copilot, which can generate seemingly authentic school logos in a matter of seconds. Via cheap print-on-demand services, one could turn such a design into unauthorized merchandise with little to no capital. Schools can and do face issues when unauthorized individuals use school branding without permission, causing consumers to mistakenly believe the school authorizes a group or a message that it does not. Imagine a public school’s branding on t-shirts featuring religious messages with which the school, for obvious reasons, cannot be associated.
Privacy Considerations in AI
There is substantial overlap between privacy governance and AI governance. Both fields require balancing what is strictly required by law, how generally applicable laws may affect compliance, and what best practices recommend. Contractual obligations also play a role.
While training data is only one of several IP concerns, it is the primary concern for privacy compliance. Many schools focus only on the Family Educational Rights and Privacy Act (FERPA) for privacy compliance. However, FERPA is not the only privacy-related law that may reach schools and Ed Tech products. Under existing state laws, businesses that sell AI products (or AI-enhanced products) must protect personal data and disclose all uses and sharing of that data. Some of these protections fall under general privacy or consumer protection laws, while others are specific to AI or Ed Tech.
With legal requirements changing rapidly at the state, federal, and international levels, schools need to anticipate what their obligations to students and staff will be over the coming months and years. Integrating AI governance with privacy governance helps K-12 schools assess risk and apply appropriate frameworks to manage compliance challenges.
Key Takeaways
Students, teachers, and school administrators at K-12 schools are already using AI products. Just as schools want to ensure students learn how to engage properly with AI tools, staff must also be trained, and their use of those tools monitored. The legal landscape around AI is constantly shifting, so check in with legal counsel on the latest developments. If you need help reviewing your school’s policies, practices, and contracts, reach out to our knowledgeable privacy & data security team or education team.
This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.