How AI is creating a trust and ethics gap on LinkedIn

In 2023 we are witnessing a whole new AI world, and I have deep concerns about how AI is creating a trust and ethics gap on LinkedIn. The platform's 20th anniversary celebrations this year coincided with the explosion of ChatGPT and OpenAI, in which LinkedIn's owner Microsoft has invested US$10 billion.

LinkedIn, the world's largest professional platform, states its mission as to 'connect the world's professionals to make them more productive and successful'. The core value proposition is for members to engage in genuine conversations, communities and learning to develop economic opportunities.

With over 985 million members globally, 14 million of them in Australia, the nucleus of success starts with trust. And whilst there are over 63 million registered companies on the platform, the hub of connection and conversation comes from individual members.

Let me say upfront that I am a huge advocate of LinkedIn and embrace change and many elements of AI. I report and write regularly in blogs and media on the good, the bad, the ugly and the brilliant. But recent LinkedIn and external AI developments raise questions of trust and ethics which need scrutiny.

State of Trust & Play

Trust is defined as a firm belief in the reliability, truth, or ability of someone or something. 

But social media fares poorly in the trust stakes, with the 2023 Edelman Trust Barometer Report finding it the least trusted industry sector at 44%.

Other global surveys rate LinkedIn as the most trusted of all platforms. Yet, like every other social media platform, the clamour for influence, visibility and revenue brings out the unscrupulous alongside the ethical, with fine lines in between.

Fake profiles, paid followers and engagement, plug-in automation sequences and other sly practices by members are just as prevalent on LinkedIn. I refer to these ploys as the Underbelly of LinkedIn.

Watch my Ticker interview here

Generative AI and ChatGPT are already accelerating the scope of manipulation and fake influence, and will continue to do so.

AI Manipulation 

Content and engagement (comments) are key pillars of success, but both are being compromised on multiple fronts.

As expected, LinkedIn is rapidly integrating a wide range of GPT-powered features. One recent example is an AI post generator, launched last month by Keren Baruch, Director of Product.

Whilst still in testing and roll-out mode, the feature lets members instantly create AI posts of opinions, advice and learnings from a prompt of just 30 words.

This will encourage misrepresentation of members’ expertise, a decline in authentic content and a maelstrom of duplicated banality. 

LinkedIn is rolling out new AI tools for recruiters, job seekers and members at a rapid rate. Whilst many will have value, others are a definite cause for concern and open to manipulation. In recruitment, the worry is the inflation of a person's skills via generative AI at the front end of the process.

Automated Comment Apps 

But more unsettling is the deluge and increasing uptake of generative AI commenting apps. AI talking to AI is a threat to trust, safety and genuine relationships.

Examples of third-party Chrome commenting apps include Engage AI, BrandEngine.ai, Taplio, Phantom Buster, Tappy and PowerIn. Many of these apps are integrated with services selling click farms and automated engagement pods.

This is particularly concerning as it’s a double whammy of unethical behaviour.  

Most apps have a pernicious option to set the tone of a comment, ranging from warm across to disruptive. One setting even offers 'let's argue'.

Many sellers of these services have a LinkedIn presence and flagrantly promote their 'prohibited' services, which LinkedIn seemingly overlooks entirely.

LinkedIn’s Policies

The User Agreement expressly prohibits the use of third-party software, extensions, bots and browser plug-ins, a category in which generative AI comment apps and tools squarely sit.

The Professional Community Policies  further state that members must make an effort to create original content, respond authentically to others’ content and not falsify information about themselves.

Legal Considerations

I asked Dr Fabian Horton, Lawyer and Chair of the Australasian Cyber Law Institute, to provide a legal lens. He shares Tracey Spicer's concerns about the biases embedded in AI-generated content.

He advised there are many matters to consider, including data protection and privacy, intellectual property, issues stemming from discriminatory or biased content and, particularly, misleading or false information.

The Australian Consumer Law protects against conduct that is misleading or deceptive or likely to mislead or deceive.  Untruthful or inflated claims about products or services could run afoul of commercial or consumer laws. 

Dr Horton cautions people to review and ensure AI generated content does not lead to legal ramifications in the areas of defamation, bullying or hate speech.   

Whilst LinkedIn supplies automated authoring tools, users are still liable for the content that is posted under their profile.  Users should familiarise themselves with LinkedIn’s Terms of Service and the relevant laws in their jurisdiction to ensure they are not breaching contractual obligations or any civil or criminal laws.  

Opinions of a Few Industry Leaders

I asked a few industry leaders who are prolific LinkedIn users to share their thoughts on these AI developments:

Mark Ritson, Founder Mini MBA Marketing

“Stupid, stupid idea.  The whole point of LinkedIn is connecting to other professional people and their genuine thoughts and actual situation.

Allowing, never mind encouraging users to allow computers to flood the app with artificial, spooky, dumb, expected spam is a massive punch in the company’s own head.  Whoever made this call is a moron.”

 Tracey Spicer, Author, MAN-MADE – How the bias of the past is being built into the future

“This is a concerning move. ChatGPT has the potential to reduce the burden of menial tasks in the workplace.   However, it’s deeply flawed and filled with bias and prejudice.

 As users, we need to focus our critical thinking skills on de-biasing this tool, as much as possible. However, the tech giants are expecting us to do this for free, as unpaid labour. Big Tech needs to take bias and equity seriously, before unleashing this technology on the general public.”

 Alice Almeida, MD, Almeida Insights

“I am not comfortable with AI/ChatGPT being used for content creation or generating comments on LinkedIn without a disclaimer of use.  As the human element is part of the LinkedIn experience, I want to know that the person who wrote a post or commented on mine is just that – a person.   

 I go to LinkedIn to learn from people’s experience and knowledge.  I want to know that challenging my opinion has come from their own mind and not from something they entered in ChatGPT.   Keep it genuine and authentic, or else future meetings in person may be a major disappointment to the other person.”

 Tom Goodwin, Digital Transformation Consultant & Author

 “These days everyone wants to build a personal brand and get heard but nobody wants to spend the time or energy to think, so as such we’ve all manner of content with zero effort required.

People fail to realise the only metric that counts isn’t impressions, but the impression your comment leaves. Which tends to be nothing unless a human could be bothered to have an opinion.

One can see this as a way to ensure new voices and ideas get spread by those less confident or practiced in writing.  Or, one can see it as a way to jam up the feed with even more lowest common denominator averageness to wade through.”

Final Wrap on Trust

At every touchpoint, humans drive their decisions, their personal brands and their ethical compass.

In a world of relentless change and turmoil, trust and ethics should be the currency of professional sustainability and success, both on and off LinkedIn.   

Note: a version of this article was also published in Mumbrella in July 2023.

 


