
Meta faces backlash after tracking employee clicks and keystrokes to train AI tools

Meta faces criticism after launching employee computer tracking software that records clicks, keystrokes and screenshots to train artificial intelligence systems

Meta has sparked a fresh controversy after reports revealed the company is rolling out software on employee work computers that monitors clicks, keystrokes and screenshots to help train artificial intelligence systems. The move has triggered criticism from workers and social media users, with many calling it invasive workplace surveillance.

The latest development comes at a time when Meta Platforms is aggressively expanding its AI ambitions through investments in data centers, talent and new automation tools. While the company says the program is focused on improving AI capabilities, critics argue it raises serious concerns about privacy, consent and the future of human jobs inside large technology firms.

Meta’s new tracking program aims to teach AI how humans work

According to reports cited by Reuters, the initiative is part of an internal project known as the Model Capability Initiative, or MCI. The goal is to gather real-world behavioural data from employees using workplace computers and then use that information to train AI systems that can perform digital office tasks more effectively.

The software reportedly records how workers interact with approved business applications and websites. This includes mouse clicks, keyboard inputs, navigation patterns and screenshots of activity during work sessions.

Meta believes such data can help create smarter AI agents capable of handling common office work such as filling forms, sending emails, managing repetitive workflows and navigating software systems with minimal human involvement.

In simple terms, the company wants AI to learn by observing how employees complete everyday tasks on computers.

Meta says data is for AI training, not employee performance reviews

The company has reportedly told employees that the monitoring system is meant only for AI development and not for evaluating staff productivity or performance. Meta has also said safeguards are in place to protect sensitive or confidential information collected during the process.

However, reports suggest participation is mandatory for workers using Meta-issued computers in the United States, with no option to opt out.

That detail has become one of the biggest points of criticism. Privacy advocates and employees say even if the purpose is AI training, mandatory monitoring creates discomfort and could damage trust between workers and management.

Employee concerns grow over privacy and surveillance

Internal reactions were swift after news of the rollout surfaced. Reports cited comments from employees who questioned why they had no choice in the matter and whether such monitoring could later be expanded for management oversight.

One employee reportedly said the initiative made them “super uncomfortable” and asked how to opt out.

For many workers, the issue goes beyond data collection. It reflects wider anxiety across the technology sector, where companies are embracing AI while also reducing staff numbers and restructuring teams.

The concern is that systems trained using employee behaviour today could eventually replace some of the same roles tomorrow.

Internet users criticize Meta on social media

Public reaction online has been equally sharp. Users on platforms such as X and Reddit mocked the idea of AI learning from human office habits.

Some joked that future bots would merely move the mouse every few minutes to appear active. Others suggested AI trained this way might learn unproductive workplace habits instead of improving efficiency.

Many users described the program as surveillance packaged as innovation. Others said monitoring on company devices is already common in many workplaces, and Meta’s rollout is simply a more advanced version tied to AI development.

Still, a common question repeated online was where the boundary lies between legitimate workplace monitoring and intrusive data harvesting.

Bigger AI push at Meta forms the backdrop

The controversy arrives during a major transformation inside Meta. The company has committed billions of dollars toward artificial intelligence infrastructure, including advanced chips, data centers and research teams.

Chief executive Mark Zuckerberg has repeatedly signaled that AI assistants and autonomous software tools will play a larger role across Meta’s products and internal operations.

Industry analysts say companies racing to dominate AI increasingly need large volumes of real human interaction data to make systems more useful. That includes learning how people solve tasks, respond to software prompts and manage digital workloads.

Meta’s latest initiative appears designed to secure exactly that kind of training material.

Why the backlash matters beyond Meta

The debate is significant because it highlights a challenge many companies may soon face. Businesses want smarter AI tools that can automate routine work, but workers are becoming more cautious about how their behaviour is recorded and used.

If other major employers adopt similar systems, questions around transparency, employee rights and informed consent could intensify.

Legal experts also note that workplace monitoring rules vary by country and region, meaning practices acceptable in one market may face resistance or regulation elsewhere.

For now, the rollout is reportedly focused mainly on US-based employees and selected applications, but the broader implications are global.

The future of work may depend on trust

Meta’s AI tracking program shows how rapidly the modern workplace is changing. Companies see automation as the next productivity frontier, while employees increasingly want clarity on how technology is shaping their daily work lives.

Training AI through real human computer activity may help build more capable digital assistants. But if workers feel watched rather than supported, resistance is likely to grow.

For Meta, the challenge now is not only building advanced AI systems but convincing employees and the public that innovation does not have to come at the cost of trust, privacy and dignity at work.


Khogendra Rupini

Khogendra Rupini is a full-stack developer and independent news writer, and the founder and CEO of Levoric Learn. His journalism is grounded in verified information and factual accuracy, with reporting informed by reputable sources and careful analysis rather than live or speculative updates. He covers technology, artificial intelligence, cybersecurity, and global affairs, producing clear, well-contextualized articles that emphasize credibility, precision, and public relevance.

Founder & CEO, Levoric Learn Editorial and Technology Analysis
