A new study warns that workplaces risk being flooded with AI-generated “workslop”: low-quality content that looks like real work but ultimately adds little value.
The term, coined by researchers at BetterUp Labs in collaboration with Stanford’s Social Media Lab, was introduced this week in the Harvard Business Review. Workslop is defined as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
According to the researchers, workslop often turns into an additional burden for teams. Instead of streamlining tasks, the output is typically incomplete, unhelpful, or lacking context, forcing colleagues to spend time interpreting, correcting, or redoing the work.
The findings may also help explain why an estimated 95% of organizations experimenting with AI have yet to see a return on investment.
In an ongoing survey of 1,150 full-time U.S.-based employees, 40% reported receiving workslop from coworkers within the past month, underscoring the growing impact of poor AI usage in professional environments.
To address the problem, researchers urge workplace leaders to model responsible AI practices, set clear guidelines for usage, and ensure that AI adoption focuses on intentional, high-value applications.