OpenAI is facing fresh scrutiny regarding its data sourcing methods. A new report indicates the company has asked contractors to upload real work from past employment, potentially exposing sensitive corporate data.
This strategy aims to train models on complex, high-quality professional tasks. However, legal experts are sounding the alarm. One intellectual property lawyer said the company is "putting itself at great risk," pointing to significant privacy violations and intellectual property theft concerns. Using proprietary code or documents without the owner's explicit permission creates a massive liability.
This revelation highlights the aggressive lengths to which AI companies will go to secure specialized training data, often skirting ethical and legal boundaries in the process.