AI Workslop

Working with GenAI these past few years has been a hoot. It has helped me with many aspects of my work: I can make website mockups, draft reports, and generally communicate what I want through natural-language conversations faster and more easily than I could before. However, I've also noticed some negatives from extensive use of these tools, both professionally and personally. Quite often the draft version of something I've created takes twice as long to polish, and the chance of a small error slipping through my personal quality checks has increased.

Why that is the case is what I'm writing about here. Providing context to GenAI and writing excellent prompts can help you produce higher-quality output, but many jobs I do cannot be completed in one sitting or within a single chat. Strategy reports, technical assessments, behavioural audits, and consumer insights research often happen over many days, and usually involve colleagues, clients, and sometimes customers. The context and background collected from all these interactions is easy to save as a series of PDFs and upload to a new chat. I'll typically also provide previous examples of work I've completed for the tool to use as a template. A typical prompt summarizes to something like "Produce a report using what I've provided, and use 'file.pdf' as a template for your output." Anyone who's followed a similar approach will know that what you get back is often much longer than your original: GenAI is wordy, and that creates headaches for whoever receives the work for quality checking or end use.

This headache now has a name: workslop. Workslop is AI-generated content that is unhelpful, incomplete, or missing context about the work at hand. Considering that it can take some time to properly load the context of a complex project, it's not surprising that we don't carry the full context of our last conversation into a new chat, or that sometimes the context is so expansive that we simply don't know how.

"AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." -- Harvard Business Review

According to HBR research, employees in the US reported spending an average of one hour and 56 minutes dealing with each instance of workslop. It's not the junior employee or the producer of the work who suffers through this; it's the senior colleague, the next team receiving the work, or the manager. Workslop has affected 41% of employees and may cost $186 per employee per month, or over $9 million per year in lost productivity. Adopting GenAI within companies is clearly not a costless productivity booster.

Workslop is also affecting how we view our colleagues. If you share workslop with your colleagues, they are likely to view you as less creative, capable, and reliable. Forty-two percent are likely to see you as less trustworthy, and 37% will think you are less intelligent; in a nutshell, sending workslop causes others to view you as less competent. Your colleagues are also more likely to report incidents of it to colleagues and managers, and to say they are less willing to work with you again.

Why is this happening?

Many people simply want an easier way to accomplish their goals, and there has always been sloppy work in the workplace. We often procrastinate, especially under constant time pressure; sometimes it's easier to take the AI shortcut, outsource work we aren't passionate about to a bot, and not think too carefully about its output before handing it off. GenAI gives us a new way to lean into those bad old habits, but now with the added cost of creating more work for our colleagues, undermining collaboration, and damaging our personal reputations. From a company-culture point of view, there are three likely reasons this happens:

  1. Missing guardrails. Leaders haven't aligned AI with strategy, meaning there are few if any cultural norms guiding how we collaborate with it, how it gets used with customers, or how we interact with it (e.g., cybersecurity policies).
  2. Mindsets. Not all employees are the same. Early adopters use GenAI 75% more than the early majority, and while early adopters are more likely to use AI purposefully to achieve their goals (by enhancing their own creativity), the early majority are much more likely to use GenAI to avoid doing work. This is the group for whom new cultural norms will be most valuable.
  3. Collaboration. The skills needed to work well with GenAI, especially on complex work (giving prompts, offering feedback, describing context), are much better applied collaboratively, yet today many employees do this work individually and then share their outputs. Working alone with GenAI, without involving my business partner, produces workslop that slows us both down by causing re-work. Better prompting could help, but creating presentations is a team sport: when we start alone, it usually takes us more than one day or sitting to create a new presentation, but when we first brainstorm together with GenAI, we are much faster and produce far less re-work.

What can leaders do about it?

Ensure leaders are aligned on vision and strategy. Consider a banking strategic-challenge statement:

"How might we achieve our 5-year goals using GenAI to ensure our customers become financially better off?" Statements like this provide high-level constraints on where usage and innovation should focus.

Ensure usage of GenAI is value-aligned by shaping new cultural norms around its use:

  1. Managers and teams may need new incentives or KPIs to improve quality and reduce re-work within their teams.
  2. New collaboration rituals for brainstorming and quality checking can be useful. Earlier this year, we ran a prioritization exercise for a group of senior leaders, and it was incredible: the diversity of industry expertise and international experience let us think through priorities rapidly, probing the data, exploring differences of opinion, and running simple forecasts before drafting the roadmap.
  3. New communication tools such as checklists for peer or manager review -- like a pilot's takeoff safety checklist -- lower compliance effort and cut errors by reducing what the reviewer has to remember to check.

Drafting policies around acceptable use and best practices can also help positively shape cultural norms and employee behaviour:

  1. What company information, if any, can be uploaded to personal GenAI accounts?
  2. Should new multi-employee project work begin with GenAI supported brainstorming meetings? What information should always/never be included?

It's been three years now since GenAI became mainstream. From my corner of South-East Asia, motivation to adopt GenAI to drive growth and reduce costs is high: employees want it, and leaders want it. Yet most businesses are beginning to struggle with unexpected challenges like workslop. Who knew that the technology that promised to reduce our administrative work could actually end up giving us even more? For a time, the amount of AI workslop even forced me to take a break from social media. Like any technology, though, we need to find ways to make it work best for us, and following these three leadership points will go a long way toward helping us adapt and become genuinely more productive.