Ian Mulvany

October 16, 2023

Demystifying AI in the workplace

Last week I took part in an internal discussion with Carolyn Brown, the CIO of the BMA, where we talked all things AI and how we think about its possible impact on the workplace. I think we had about 120 folk listen in, and we recorded the session in case anyone wants to catch up. The discussion was nicely moderated by Gordon Fletcher, and we called the session “Demystifying AI”. We didn’t unpack how the technology works, but focused more on the implications of this generation of tools in the workplace. I was pretty happy with the discussion points we got into, so I’ve decided to paraphrase some of my own thoughts on the questions that came up here. As I read through the questions I’m probably going to answer with a variation on what I said on the day; writing in retrospect always gives a slightly different perspective.

Overall, my main reason for taking part was to get across the importance of taking the change that is happening seriously. I wanted folk to understand that things have really changed, and they need to pay attention here. It might end up looking like a batch of features that get added to the tools we use every day (e.g. Copilot for Windows, Duet AI for Google Workspace), but the more you figure out how to use them, the more valuable they will be to you.

One other thing I wanted to get across is the point that “AI” doesn’t do things. We hear “will AI take my job?”. AI has no agency; people and organisations have agency. But these tools do shift the economics of certain kinds of work, and we all have a responsibility to understand that, and to equip ourselves to make the most of it.

The session started with some comments from experts, including Ethan Mollick. I’ve found his blog on this topic really helpful in framing my own thinking; you can find it here - https://www.oneusefulthing.org.

OK, on to the questions, and what I can recall of my own responses:

Q: In the video, we heard from Ethan Mollick, a researcher at the University of Pennsylvania, who studies the effects of artificial intelligence on work and education. He talks about the pressure to use AI or feel left behind. Do you feel this pressure and what are the potential consequences of throwing ourselves into using tools like Chat GPT for the quality of our work and our productivity?

This question breaks down into three parts: how much pressure am I feeling about the adoption of these tools, what do I think about their impact on quality, and what do I think about their impact on productivity?

Talking about feeling pressure, I have to say I do feel it, probably because of all of the hype that is out there. In the same week as this session I was at conferences run by both IBM and Google, and they are pushing this topic hard; everyone is. We are in a moment of high uncertainty, and that opens the door for vendors to begin conversations with potential clients in a way we have not seen for some years. There is a bit of a gold rush going on. Added to that, my own experiences have been so positive that I also think there is something significant here. I do feel a lot of pressure, but I have to balance that with an understanding that we are just at the start. There is a risk of jumping in too soon, so I think taking scaled bets is a very good way forward. What is true is that these things are not going to go away. They are here, they are highly likely to get better, and so there will be a lot of change.

In terms of productivity, I think there are real productivity gains to be had. At BMJ we are seeing definite advantages in some aspects of how folk are using the tools. These range from:

  • Software engineering, where some folk are getting efficiencies in writing documentation and tests.
  • Multiple hours of benefit per week in some areas of data analysis tasks.
  • Some of our folk have been able to write macros for Salesforce, a skill that was slightly beyond them before using these tools.

We have quite a few other examples as well, but so far we are mostly using the tools in an off-the-shelf kind of way. I think there is a lot of potential in using them in targeted areas of workflow.

In terms of overall productivity there are three ideas that kick around in my head, and I’m not exactly sure how they all fit together.

The first is around labor displacement. Some people worry that these tools will take away all of the work; this is sometimes called the lump of labor fallacy. What we have seen happen historically is that as we get more effective at doing certain things, the demand for those things rises (also known as Jevons paradox). So I think there will be productivity gains, but those gains will not erode the amount of work that there is to be done.

The second idea is one that a friend of mine - Julia Lane - left me with. She is a labor economist at NYU. If we work in a knowledge economy, then the value of the economy must be a function of the knowledge in the economy - i.e. - the amount of high quality thoughts that we can exchange. If these tools free us up to think more and better thoughts, then that should lead to increasing GDP.

Both of those ideas, though, are balanced by this next thought. The great liberator of our time was supposed to be email. Email was going to take away much of the burden of paper-based office work. I don’t think most people would tell you that email has been a panacea, so I do worry that these tools might end up just being an amplifier of ourselves. They may make those who like to be busy busier still, those who like to skim details skim harder, and those who get caught in dead ends run down ever more complex corridors.

This comes back to agency, right? These tools won’t do things. It will all hang on what we will choose to do with these tools.

So, finally, on the quality question. For BMJ this is so simple. We stand by the quality of what we publish. These tools can create content that looks plausible, and if we get lazy and let things go out into the world without taking responsibility for what we put out there, that is an existential risk to our reputation. That’s why we have now put strict governance around this in place at BMJ. I’d say too that each of us working in the business has a responsibility to educate ourselves about the tools we use. I think you have a responsibility to understand what these tools can do, and to use them responsibly.

That leads nicely to the next question:

As the BMJ’s CTO, what work are you doing to mitigate the risks of colleagues jumping headlong into the use of AI?

We are in a capability overhang, which means that we don’t fully understand what these tools can do, nor how to make the most of them. To figure that out we need as many folk as we can using them, in a fully responsible way. What that has meant at BMJ is that we have gotten board approval for a specific strategic approach to the introduction of these tools, which has included creating a governance process. That process is headed up by our head of legal, and I sit on that group along with a small set of subject experts from across disciplines.

We look for three kinds of risk in use cases that we approve across the business:

  • Data privacy issues.
  • Workflow substitution (introducing these tools into any decision-making process).
  • Content that goes out to the public, and the risk of misinformation.

We work with colleagues to share best practice, and what we are learning.

I think you have to realise as well - there are going to be a lot of bad actors out there who are not going to have any qualms about using these tools for nefarious purposes, e.g. creating manipulative systems, increasing spam, generating fake research papers. I like to think that there are those of us who are working on the side of good, and we have a responsibility to make these tools work to the benefit of society, and to share what we are learning.

AI offers the opportunity to analyse huge amounts of data and extract insight that can help our businesses grow. But how do we do this in a way that respects data privacy particularly with respect to data protection legislation like GDPR?

This is easy: observe the requirements of GDPR, and run the workloads in a safe environment. We have been looking at Google Vertex AI for that, and IBM’s platform is compliant in this way as well.
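To illustrate what “a safe environment” can mean in practice, here is a minimal sketch using the Vertex AI Python SDK as it looked in 2023. The project ID, region, prompt, and model are placeholders of my own, and per-region model availability would need checking; the point is simply that the workload stays pinned to a cloud project and region you control.

```python
# Minimal sketch: running a generative model inside a controlled Google Cloud
# project and region, rather than sending data to a public consumer tool.
# Assumes the Vertex AI Python SDK (pip install google-cloud-aiplatform).
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project, and an EU/UK region so data is processed in-region
# (check which models are available in which regions).
vertexai.init(project="my-gcp-project", location="europe-west2")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Summarise the key data-protection obligations under GDPR in three bullets.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```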

Another area of ethical concern is how we ensure our use of AI does not reinforce bias. We were discussing the concept of representational harms in our AI teams group a few weeks ago. The discussion was prompted by a video that showed the in-built biases in the AI image generation app, Midjourney. If you ask it to generate a picture of a CEO, you get an older white guy. Ask it to generate a drug dealer and you get a younger person of colour. How do we protect ourselves from these biases when developing our own processes involving AI?

I think we have to understand where those biases come from, inform ourselves about what we want to use these tools for, and, if we get to the position of deploying them in decision-making processes, ensure that there is decisioning governance in place. That can be done in a programmatic way, but to be honest we have not yet reached the level of sophistication to put that in place within BMJ; I could see it forming part of what we experiment with next year.

What do you see as the biggest opportunities for the use of AI to benefit the BMJ?

There are three classes of benefit that I see coming along here. The first is addressing the problem of fake papers, which is an existential threat to what our industry does. While it’s not an enormous problem for the BMJ journals at this point in time, I think we have to look closely at it. The second is that there are a lot of ideas that can be applied to make papers better, not just to catch fakes.

We have also triaged about 120 ideas across 30 or so idea clusters. We think there is a lot of opportunity around specific workflow enhancement at the editorial stage. That will need to be looked at with care, but for example, could we accelerate the process of stats checks on papers?
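To make the stats-check idea more concrete, here is a purely hypothetical sketch of how a first-pass check might be prototyped with a large language model. The prompt, project, model, and function name are my own illustrative assumptions, not anything BMJ has built, and the output would only ever be a draft for a human statistician to review.

```python
# Hypothetical sketch of an automated first-pass stats check on a manuscript's
# methods section. The prompt and rubric are illustrative only.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="europe-west2")  # placeholders
model = TextGenerationModel.from_pretrained("text-bison")

STATS_CHECK_PROMPT = """You are assisting an editorial statistics check.
Review the methods text below and list, as bullet points:
- any statistical tests that look inappropriate for the stated design,
- anything that appears to be missing (e.g. confidence intervals, sample size justification),
- anything a human statistician should look at first.
Do not invent findings; say "unclear" where the text is ambiguous.

Methods text:
{methods_text}
"""

def first_pass_stats_check(methods_text: str) -> str:
    """Return a draft checklist for a human statistician to review."""
    response = model.predict(
        STATS_CHECK_PROMPT.format(methods_text=methods_text),
        temperature=0.0,        # keep the output as consistent as possible
        max_output_tokens=512,
    )
    return response.text

print(first_pass_stats_check(
    "We compared means across five groups using repeated t-tests..."
))
```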

What steps still need to be put in place for you to realise that?

We need two things. We need to prototype and test some of these ideas with deployed code around how we interact with those workflows. At the same time we need to understand whether there is real value to be created there, and how that value can be used to create a sustainable development and support model for those ideas. We have a habit of spending to build features or processes without understanding the long-term cost of maintenance, so any ideas we come up with have to be able to wash their own face, so to speak.

Finally what role do ordinary employees have to play in ensuring we see the benefits of AI but also mitigate the risks?

I ended by saying: learn about the tools, read Ethan’s blog, and take some responsibility for educating yourself.

It was a great session, and after the session a few other questions came up, which I’ll answer here:

How do we stop confidential or business sensitive information leaking through the use of AI tools?

The way to do this is to use private environments. For example, Google’s Vertex AI can run your models inside your own private environment (a VPC), with no data egress. We need folk to learn about the different modes in which these tools can operate, and to work with legal to make sure that the Ts and Cs for any given tool are compliant with our policies. For example, if you use regular ChatGPT, it will use your interactions to further train its model; it won’t do that if you interact with it via the API.
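As a small illustration of the web-versus-API distinction, here is a minimal sketch of calling the OpenAI chat completions API directly. The model name and prompt are placeholders, and the comment about training data reflects OpenAI’s stated policy at the time of writing rather than a guarantee.

```python
# Minimal sketch: calling the OpenAI API directly rather than the consumer
# ChatGPT web app. Per OpenAI's stated policy at the time of writing, data
# sent via the API is not used to train models by default.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # keep keys out of source code

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4",  # placeholder; use whichever model is approved
        "messages": [
            {"role": "user", "content": "Summarise the attached meeting notes."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```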

How can we use AI now to help with our work? I can see many applications in my own and others’ work, but how do we access the right tools?

We have made Bard generally available across BMJ. We are telling folk that if they have a business need that could be helped by ChatGPT Pro, they should talk to their line managers.

Ian mentioned before that we have responsibilities for what we put out. Should the BMA develop an ethical position on this, some guidelines we hold ourselves accountable to? Ian, you did answer this briefly at the end of the Q&A session, but it would be good to have a bit more info on this.

BMJ has an editorial policy on the use of these tools, and we are adopting the same standards for our own use, in terms of being transparent about their use. I think this is a wide-ranging debate that is going to evolve a lot over the next 12 months. It might be that as a society we introduce strict restrictions; it might be that we treat them as nothing more than very fancy autocorrect. What happens will, I think, depend on how their use affects the markets we work in, and at the moment that is fairly unpredictable.

Classifications from OpenAI: categories:

  1. ai in the workplace
  2. productivity and efficiency
  3. data privacy and gdpr compliance
  4. ethical considerations and bias in ai
  5. opportunities for ai in the bmj (journal)
  6. steps to realize ai benefits in the bmj
  7. role of ordinary employees in ai adoption
  8. mitigating risks of confidential information leakage through ai tools
  9. accessing the right ai tools for work
  10. developing an ethical position and guidelines for ai use

About Ian Mulvany

Hi, I'm Ian - I work on academic publishing systems. You can find out more about me at mulvany.net. I'm always interested in engaging with folk on these topics; if you have made your way here, don't hesitate to reach out if there is anything you want to share, discuss, or ask for help with!