When is ‘not-a-bot’ (ie human) a differentiator?

So here’s a puzzle I’ve been turning over in my head recently, and it’s an analysis that I think we will all be repeating in different ways over the next few years. 

In a world of generative AI, when is it an advantage to counter-position yourself as a human?

At the moment I am producing a steady stream of cover letters for job applications. The purpose of the cover letter is to get you through the first filter in the recruitment process – ie to get your CV moved from the bigger pile to the smaller pile. You take the position description, match it against your CV and experiences, and highlight the best matches as effectively as you can – to catch the eye of the person tasked with scanning a couple of hundred of the things.

I am very aware that ChatGPT and similar offerings are available to my competition, and that it is very straightforward to prompt these tools with something like “Write a cover letter for this position description, using the experiences described in this CV”.
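For illustration, here is a minimal sketch of what that prompt looks like wired up to the OpenAI Python client. The model name and file names are placeholders, not a recommendation:

```python
# A minimal sketch of the kind of prompt a competitor might run, using
# the official OpenAI Python client. The model name and file names are
# placeholders; this is an illustration, not a recommendation.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

position_description = open("position_description.txt").read()  # placeholder
cv = open("cv.txt").read()  # placeholder

prompt = (
    "Write a cover letter for this position description, "
    "using the experiences described in this CV.\n\n"
    f"Position description:\n{position_description}\n\n"
    f"CV:\n{cv}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is not the code itself but how low the barrier is: a few lines, and out comes a plausible-looking letter.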

So, should I be using ChatGPT to write my cover letters? Even if only to produce a first draft that I can then tinker with?

I’m pretty sure the answer is no, but given that I am a technologist applying for technical leadership positions, it seems like I should be able to articulate the arguments.

The first one is sameness. I assume that the person doing the filtering is extremely well versed in spotting AI-generated content. Overused wording, familiar sentence structure and stock responses to the position description requirements are going to stand out. This gives me pause about using a generative tool at all: I would want to change every single word to make sure I wasn’t putting the reader off, so what would be the point, even for a first draft?

The second is a more fundamental, epistemological argument. Does a generative tool have access to an advantage? If the task is to relate the position description to my background, then the tool does not know anything that I don’t. If there is a requirement for “experience with regulatory regimes” then there is no mystery to me about my relevant experience. The AI is not going to uncover something I have missed. More than this, I have access to many things that aren’t in my CV and am free to find a way to reference them in my response – so I don’t expect the chatbot to come up with a fact that I have missed.

ChatGPT does have access to a corpus of cover letters, so it is capable of drawing on their form and wording in ways that I am not. This is absolutely a potential advantage and might be a good argument for using the tool. However, ChatGPT does not have access to the success rate of these cover letters – so you would expect it to draw on how the average cover letter is worded, not how the best are put together. So the question is how I judge my own wordsmithing against the average. In this particular contest I judge myself (rightly or wrongly) as being able to beat the average – but I can imagine many similar situations where the average might beat my best effort.

The third consideration is that the reader, the first filter in the process, may not be human. These days every HR system that a recruiter is likely to be using to manage the flow will offer some sort of AI to rank inbound applications. In that case my first argument, about appealing to the human reader, is irrelevant. However, the epistemological argument comes back in:

Does ChatGPT know better than me how to write a letter that will get through an algorithm? 

(Even phrasing this question makes me feel like I’m in a sci-fi novel – I’m talking about robots talking to robots!)

At the moment the answer is no: I can’t see how a chatbot has access to this knowledge, nor an ability to reason through it, even if I added it as a stipulation in a prompt. If I wrote a cover letter aimed at a machine-learning filter I’d use simple sentence structures and load it up with keywords. I’d then take that and make it friendly and meaningful to a human reader as well – and that kind of nuanced wordsmithing is beyond current tools.
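To make that concrete, here is a toy sketch of the crudest automated filter imaginable – plain keyword overlap between the position description and the letter. Real ranking systems are more sophisticated than this, but the incentive it creates (simple sentences, lots of keywords) is the same:

```python
# A toy sketch of the crudest automated filter: keyword overlap between
# the position description and the cover letter. Real ranking systems
# are more sophisticated; this just shows why keyword loading works.
import re


def tokenise(text: str) -> set[str]:
    """Lower-case the text and split it into a set of distinct words."""
    return set(re.findall(r"[a-z]+", text.lower()))


def keyword_score(position_description: str, cover_letter: str) -> float:
    """Fraction of the position description's distinct words that also
    appear somewhere in the cover letter."""
    required = tokenise(position_description)
    present = tokenise(cover_letter)
    return len(required & present) / len(required) if required else 0.0


print(keyword_score(
    "Experience with regulatory regimes and technical leadership",
    "I have led technical teams under several regulatory regimes.",
))  # 3 of 7 words match – no stemming, so 'leadership' != 'led'
```

A filter this naive rewards exactly the writing a human reader would find leaden – which is why the letter needs a second pass to make it readable as well.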

So I am left with the conclusion that I should stay away from generative AI for this purpose. In fact the first argument suggests that I should lean on my humanity: make sure that my cover letters contain phrases and references that clearly signal that the author was not a bot.

All of this is NOT an argument against ChatGPT in general; it is not a Luddite treatise. In the future, businesses are going to have to repeat this analysis time and time again:

For a particular purpose, what fundamental advantage can an AI offer, and when is human input likely to be better?

Especially when dealing with core, value-adding functions, the winners are likely to be the organisations that can apply this reasoning correctly.
