The ways in which AI has impacted our digital lives lately often feel personal. From the “RAMpocalypse” to hardware giants pivoting toward AI datacenters, PC gaming has become a significantly less affordable hobby for the average enthusiast. At Digital Tech Explorer, we’ve watched as search engines rewrite headlines and scrape original reporting into “AI overviews,” threatening the very existence of independent tech journalism. Even advancements like DLSS 5 have left some developers wary, as the technology sometimes adds a layer of generic AI gloss over handcrafted art.
These are industry-wide shifts I track closely, but the issue became personal two weeks ago, when I discovered an AI company marketing a product built on my professional identity without my consent.
When Grammarly Cloned My Identity Without Permission
In this rapidly evolving landscape of 2026, tech hubs like San Francisco have become a unique kind of dystopia. The city remains beautiful, but the atmosphere is saturated with machine learning jargon. While sitting in a local coffee shop, I found myself surrounded by conversations regarding LLMs and “agentic potential.” It was in this setting, while scrolling through my feed, that I encountered a report regarding an AI company turning journalists into “automated editors.”
One name on the list of “experts” stood out: my own.
“Wait, what exactly is happening here?”
— TechTalesLeo, Digital Tech Explorer
A screenshot of the now-disabled AI “expert” tool featuring my professional persona.

Grammarly, the ubiquitous proofreading application that recently rebranded itself as an AI-first company called Superhuman, had quietly rolled out the feature seven months earlier. The tool offered to “review” user writing in the voices of various “experts,” ranging from literary icons like Stephen King to tech journalists like me. The implementation was problematic for several reasons:
- Lack of Consent: The company did not notify the “experts” that their professional identities were being cloned.
- Data Scraping: The tool presumably scraped entire bodies of work to train its LLM on specific writing styles.
- Quality of Output: The resulting “expert advice” was often generic, repetitive, and lacking the nuance of real human editing.
As investigative reports highlighted, the software claimed to take “inspiration” from renowned authors and sociologists. However, the actual utility was minimal, often offering surface-level guidance such as “replace repetition with varied sentence patterns.”
The fundamental flaw in these AI models is that they only have access to finished, published material. They cannot replicate the invisible process of journalism—the deletions, the fact-checking, and the structural pivots made during a real edit. As other editors have noted, these AI clones often suggest adding context that a human editor would have intentionally streamlined for clarity.
Discovering that your professional voice has been synthesized for profit is a jarring experience. Superhuman, having raised over $1 billion for AI investment, was essentially monetizing my career path without a single conversation.
The abstraction of identity in the age of generative AI.
AI presents an existential challenge to anyone making a living through creative software solutions and original content.
The initial corporate response was to offer an “opt-out” email address, rather than proactively seeking permission. This lack of transparency is exactly why Digital Tech Explorer advocates for more rigorous standards in AI development. It is also the reason behind the current class-action lawsuits facing the industry.
GPTZero’s Proposal: Putting a Price Tag on My AI Persona
One might assume the backlash against Grammarly would serve as a warning. Instead, it opened a market niche. Shortly after the controversy broke, I was contacted by GPTZero. They weren’t looking for a comment on the situation; they were looking to license my identity.
Their pitch was simple: by “defining core principles” and “reviewing sample outputs,” I could become an official “AI Expert” within their suite of tools. This platform, which includes an AI Detector designed to identify content from ChatGPT and GPT-5, is marketed as a way to “make every word worth reading.”
The dashboard where human expertise is converted into AI templates.
While GPTZero attempts to focus on the process of editing, the underlying technology still relies on models like GPT-4.1. These models are trained on a massive corpus of data, much of which is currently the subject of legal disputes involving major publishers and authors like George R.R. Martin.
The offer for my “persona” was a one-time fee of $2,000. For that amount, I would help train a template that would then perpetually replicate my “voice” for their subscribers, with no ongoing royalties or control.
The True Cost of AI’s Expansion
Industry leaders continue to navigate the legal complexities of training data and fair use.
While some legal frameworks have categorized this type of data ingestion as fair use, the ethical reality remains complicated. If the world’s most valuable tech companies can use human-generated hardware reviews and coding scripts for free to build multi-billion dollar products, the value of original human craft is systematically undermined.
At Digital Tech Explorer, we believe technology should be an educational and empowering tool. However, when AI companies argue for the right to consume all human writing while simultaneously gatekeeping their own data, it reveals a significant imbalance.
The suggestion that a career spent refining the craft of technology storytelling can be distilled into a template for a small one-time fee is a sobering reminder of where the industry stands. Protecting intellectual identity is no longer just a legal hurdle; it is a fight for the future of professional expertise in the digital age.