Friday, 28 September 2018

HR AI WTF

You may be, but probably won't be, aware that, outside of writing science and speculative fiction, I'm a fully paid-up card-carrying (quite literally) human resources professional.  There's an old maxim: write what you know.  Well, other than the occasional character who works in HR ('The Lodeon Situation') or a scene around a conference table ('Farndale's Revelation'), I've tended not to write about the world of office life generally or personnel management specifically.  Possibly because editors don't have to read beyond the first page or two of anything duller than dark matter.

Well, I thought that I'd combine these two interests in this posting, the glue that binds being the prospect of a dystopian future, in degree of awfulness somewhere between The Handmaid's Tale and Man City winning the title each season without challenge for the next thirty years.  And I'll be taking as my starting point the July/August 2018 edition of the Chartered Institute of Personnel and Development's People Management magazine, which focuses on technology and what it's going to do for us all within the profession.  Or with us, all depending.

Now, don't get me wrong: technology is not just useful, it's indispensable - I couldn't run either my working or writing life without it, and I suspect you couldn't either (if you're a Unabomber-style off-grid hermit who's somehow also reading this, let me know how; I'm curious).  I've just pinged off a slide pack for a colleague's presentation, having picked up a draft from another colleague to polish.  We're all at our various homes, none at the client's offices.  We couldn't do this without the internet, the ubiquity of Microsoft Office, Google and the rest.  That's the prosaic everyday stuff.

But there are some developments going on at the edges that genuinely scare me.  Like ThriveMap, which uses people analytics to ensure employers select candidates who have the best cultural fit.  That's a phrase which sounds innocent, attractive even.  Why on earth would you not want cultural fit?  But I bet eugenics sounded like a similar no-brainer to many between the wars.  And I don't think the two are entirely without parallels.

Yes, I know the arguments: that cultural fit doesn't mean everybody being the same sex, race, or religion.  It means looking at attitudes and propensities rather than skin colour and church of choice.  (Which raises the question, how much diversity is there in that?  How many companies don't want intelligent, initiative-taking team-workers, able to communicate, problem-solve, and face customers?)  But that hides the fact that there are cultural nuances to communication, hierarchy and the rest.  Take deference: one culture's talking around a point after being given instructions is another's insubordination.  Lack of eye contact doesn't always mean a lack of engagement.  I'm struggling to see how this encourages diversity rather than embedding a, say, white, Anglo, first-world perspective.
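
To put some flesh on that worry, consider what a 'fit' score is likely to be under the bonnet.  ThriveMap doesn't publish its model, so this is pure conjecture on my part, but a naive scorer might do little more than measure a candidate's similarity to the incumbent workforce.  A toy sketch (in Python, with trait scores entirely of my invention) shows where that leads:

    import math

    # Hypothetical trait scores per person (my invention, not any vendor's):
    # [directness, interview eye contact, comfort challenging instructions].
    incumbents = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.8], [0.9, 0.7, 0.9]]

    # The 'culture profile' is just the average of the current workforce.
    profile = [sum(t) / len(incumbents) for t in zip(*incumbents)]

    def misfit(candidate):
        # Distance from the profile: the smaller, the better the 'fit'.
        return math.sqrt(sum((c - p) ** 2 for c, p in zip(candidate, profile)))

    candidates = {
        "mirror of the incumbents":       [0.9, 0.8, 0.8],
        "deferential, indirect, capable": [0.3, 0.2, 0.4],
    }
    for name, traits in candidates.items():
        print(f"{name}: distance from profile = {misfit(traits):.2f}")
    # Prints ~0.03 for the clone and ~0.92 for the candidate who expresses
    # competence in a different cultural register.  Screen on 'fit' and the
    # workforce converges on whoever it already employs.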

And what about Olivia, your recruitment chatbot?  She'll screen, sift and longlist candidates so you don't have to.  Sounds great, but we all remember how Microsoft's Tay went off the rails on her first day, don't we?  Just saying...

And as for 'Put an end to harassment with the power of blockchain' (Vault), that just felt like a headline hanging off the side of a skyship in a Phil Dick novel.

Just when I thought it couldn't get any darker, I came across this nugget from Nicola Strong, MD of a 'virtual learning, leadership and communication skills consultancy': "I believe that when AI is able to do the more mundane parts of our jobs for us, we'll have more work than ever."  What the fuck?  More work than ever?  What's the point of technology if it's simply going to replace nine-to-five drudgery with the need to be on-message and ready to rumble eight-to-seven?

If you've ever looked into deathbed regrets, even cursorily, you'll find that consistently in the top three is the regret of prioritising work over family, of missing the kids growing up, of not maintaining familial relationships.  Well, luckily, Ms Strong seems to be suggesting that after the march of the machines none of us will have the time to have a family.

I knew there had to be an upside.

Tuesday, 18 September 2018

Partial Recall

I re-watched Total Recall, the 1990 incarnation, the other night.  Didn't remember a damn thing about it.

Thankyouthankyouhereallweekdon'tforgettotipyourwaitress...

Sorry.  That came out more like a tired sub-Vegas stand-up routine than it was intended to.  I meant what I said literally: I re-watched fresh-faced Arnie finding out who he was and what Mars meant to him, knowing that I'd seen it before, but not having the slightest idea what was going to happen next, none of the scenes ringing the faintest bell.

When I really, really wring my brain out to establish what little there is in the drawer marked 'Total Recall', all I can muster up is a sort of warm memory of enjoying it.  It makes me wonder what sort of memory that is, whether it even counts as a memory.  It's more meta than that, a recollection of an opinion of an experience, a footprint in the dust from which I extrapolate where I've been.

I've always found philosophers' analogies for how the mind works unsatisfactory.  We used to be told that the mind worked like a library; nowadays it's like a computer.  Whatever the technology of the day, it seems to boil down to a big bucket of black and white data that we can dip into.  And that seems to sit very uncomfortably with the merry dance my neurons had been engaged in.

On the library analogy, the way I've always seen it presented is that we (our soul? our essence? our ego? what exactly?) are prowling the shelves, pulling down tomes and verifying facts.  It strikes me that that's fundamentally flawed.  For a start, how do you account for the difficulty, the uncomfortable feeling, at times the impossibility of holding views and opinions that don't quite mesh?  For that matter, how do you account for views and opinions at all?

If there's any mileage in the analogy, I think we are the library, and it's the library itself that is opening volumes, bringing the knowledge to the fore, but with all the other stuff in the background.  But we're also the librarian, having a say over what makes it onto the shelves, making sure that they have an editorial stance that is us.  But that all feels like I'm trying to make something fit that was never intended to.

The computer analogy does little other than reduce books to ebooks; same analogy, different technology.  Neural nets appear to offer better models for learning, but less so for memories and knowledge - or intelligence and consciousness overall.  What exactly sits at the junctions of the net in those cases?  Or is the net, in effect, your entire personality, memories, attitudes and aptitudes?  If I'm a racist who's good at needlework, is it my neural net that pulls me towards doing a damn fine quilt?  Just with a swastika in the middle.
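
To make that concrete - and a toy perceptron is no more a mind than a row of dominoes is, so take this purely as an illustration - here's about the smallest 'learner' I can write.  It picks up logical AND, and once it has, there's nowhere in it you can point to and say 'that's where the fact lives':

    # Classic perceptron learning rule, run over the four rows of the AND table.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                  # a handful of passes is plenty for AND
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1        # nudge the weights; no fact is 'filed'
            w[1] += lr * err * x2
            b += lr * err

    print("weights:", w, "bias:", b)     # ends up at [0.2, 0.1] and -0.2
    for (x1, x2), _ in data:
        print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
    # The 'knowledge' that 1 AND 0 is 0 isn't stored anywhere discrete - it's
    # smeared across three numbers that only mean anything taken together.

Scale that up by a few billion junctions and you have my quilt problem: the net isn't consulting a memory, the net is the memory.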

Sci-fi has brought the absurdity of the mind as a box of facts to the fore many times over with computer-says-no logic engines like Spock and Data.  After all, an orrery is not the universe (I'd like to see that on a t-shirt, please).*

When Babbage thought up his difference engine, one of the key controversies was putting what God had put into Man and Man alone - the ability to reason - into a machine.  Where did that leave us?  Where was our special status?  I think that worry missed something fundamental.  Calculations are actually the easy bit, the - pun intended - mechanical bit.  The grey area is doubting, misremembering, having an uncanny feeling about, mild bigotry, Machiavellian scheming and the rest.
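
As it happens, Babbage's own machine makes the point nicely.  The Difference Engine embodied the method of finite differences, which reduces tabulating any polynomial to repeated addition - no multiplication, no judgement, just cranking.  A few lines of Python (mine, obviously, not Babbage's):

    # Method of finite differences for f(n) = n^2: the second difference of a
    # quadratic is constant (here 2), so the whole table falls out of addition.
    f, d1, d2 = 0, 1, 2    # f(0); first difference f(1)-f(0); constant second difference

    for n in range(10):
        print(n, f)        # prints n and n squared without ever multiplying
        f += d1            # next value of the polynomial
        d1 += d2           # next first difference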

Whatever model works for all of that, I'm sure of two things - it won't be a box of facts, and we're nowhere near stumbling on it.

* Yes, yes, I know that an orrery is a model of the solar system, not the universe, but somehow that doesn't have the same ring.
