Internal Communication is Failing
In the mid-1980s, the toy company Mattel held a quarterly town hall for managers and above. Those employees below the manager level were excluded for purely logistical reasons: the largest room at the Mattel headquarters campus wasn’t large enough to accommodate all local employees, and there was no appetite to spend the money on a hotel ballroom.
As communications director, I knew that the distribution of information from those attending town halls to their teams was uneven at best. I also knew a lot of those lower-level employees felt disenfranchised. I recommended that we hold the town hall meeting twice. Leadership bought into the idea, scheduling the first session for managers and above, and the second for everyone else.
The first time those non-managers attended the town hall, they listened raptly as the CEO delivered his updates and analyses. After the meeting, a group of administrative assistants approached him. “What did you think?” he asked.
One of them said, “This was wonderful. We now understand a lot of things that are happening around here that we didn’t understand before. We just have one question,” she said. “Who are you?”
The CEO had not introduced himself, assuming everyone knew who he was. For those lower-level employees who had little contact with leadership, this was a flawed assumption.
Today, that CEO would be less nonplussed than he was 40 years ago. A new survey of 7,550 workers from Workvivo revealed that 46 percent of frontline workers don’t know who their CEO is. Worse, 87 percent said the company’s culture does not apply to them, indicating that the culture of the frontline differs from the culture the company projects in its messaging.
It would be bad enough if that data point existed in isolation.
It does not. Most of the data points to massive systemic failures of internal communication across all industries.
The COVID-19 pandemic led many leaders to lean on their internal communicators, prompting a lot of those communicators to take a victory lap, congratulating themselves on their newfound respect from the C-suite. The pandemic has been in the rearview mirror for some time now. Internal communicators who believe leaders have maintained the same level of confidence may want to examine the data. While leaders still consider internal communication necessary, they no longer view it as a top priority. In their minds, it has become just another component of employee experience strategies.
A torrent of research shows that communication with employees isn’t working well at all.
The Trust Gap: Leaders Think They’re Connecting—But They’re Not
If you ask most executives, they’ll tell you their people are aligned, energized, and thriving. But step down a rung or two, and the view changes dramatically. Leaders think they’re broadcasting clarity; employees feel like they’re stuck in a fog. The discrepancy is laughable: 81 percent of leaders believe their employees are motivated and engaged, yet only 52 percent of employees agree. Almost all executives (95 percent) believe their workforce thinks leadership cares about employee well-being. Only half the workforce agrees. These aren’t minor misalignments. They are tectonic shifts in perception. And if internal communication is supposed to be the bridge, it’s failing under the weight of executive overconfidence and employee disillusionment.
Trust, already brittle in many organizations, is eroding fast. In just two years, employee trust in business leaders dropped from 80 percent to 69 percent, according to Mercer. Gallup rubs salt in the wound: only 23 percent of employees strongly agree that they trust the leadership of their organization. The view from the front lines is even bleaker, with the Edelman Trust Barometer showing that executives are 2.5 times more likely than frontline workers to believe their CEO is being truthful. This is a communication failure of the highest order.
Alignment Has Left the Building
For years, communicators have discussed alignment as if it were a corporate North Star, an essential ingredient for any functioning culture.
Zora Artis and Wayne Aspland define alignment as the deliberate process of connecting employees to strategy through a shared understanding of purpose, values, and goals, creating clarity, engagement, and coordinated action. Their research emphasizes that alignment isn’t just about cascading information, but also about enabling meaning and ownership at every level of the organization.
And yet, here we are: 77 percent of executives say their companies aren’t aligning employee goals with the organization’s purpose. Dwell on that for a moment: Most of the C-suite knows they’re not doing it, and either they don’t know how or they don’t prioritize it. But what, exactly, is internal communication supposed to be doing if not this?
This failure shows up in subtle ways. When employees feel disconnected from strategy, they retreat into silos, doing the work without understanding the “why.” That kind of culture is fertile ground for disengagement, cynicism, and eventually, resignation. It is no surprise, then, that more than half of American workers are watching for or actively seeking a new job, matching record highs. And if you think this isn’t connected to communication, think again: 61 percent of those employees considering a move cite poor internal communication as a contributing factor, and over a quarter blame poor communication outright.
The Efficiency Drain That No One Is Fixing
Too often, the cost of poor communication is couched in abstract terms focused on morale and engagement. New research helps us see it in terms of cold, hard cash. Axios HQ reports that the average employee loses more than 41 workdays a year to bad communication. That’s over eight weeks of productivity gone, vaporized by vague directives, information silos, and endless clarification loops. For a knowledge worker making $100,000 to $150,000 annually, that translates to nearly $20,000 in wasted salary per person. Now multiply that across a department, or an entire company.
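For readers who want to sanity-check that figure, here is a minimal back-of-the-envelope sketch. The 41 lost days come from the Axios HQ research quoted above; the roughly 250 working days per year is our own assumption, not a number from that report.

```python
# Back-of-the-envelope check of the Axios HQ figure quoted above.
# Assumption (ours, not from the report): roughly 250 working days per year.
WORKDAYS_PER_YEAR = 250
DAYS_LOST_TO_POOR_COMMUNICATION = 41

def wasted_salary(annual_salary: float) -> float:
    """Portion of salary paid for time lost to poor communication."""
    return annual_salary * DAYS_LOST_TO_POOR_COMMUNICATION / WORKDAYS_PER_YEAR

for salary in (100_000, 125_000, 150_000):
    print(f"${salary:,} salary -> about ${wasted_salary(salary):,.0f} lost per year")
# Roughly $16,400 to $24,600 per person, so "nearly $20,000" for a
# mid-range knowledge-worker salary is consistent with the 41-day figure.
```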
It doesn’t stop at lost time. Communication breakdowns have become part of the daily experience for employees. According to research from Grammarly and The Harris Poll, 100 percent—yes, every single one—of knowledge workers report miscommunications at least weekly. A quarter say it happens multiple times a day. This isn’t just annoying; it’s corrosive. It wears down satisfaction, erodes morale, and adds fuel to the burnout fire. Poor communication isn’t just a glitch in the system. It IS the system. Too many internal communicators wouldn’t even consider that part of their remit. Leaders who weigh these results against what they spend on internal communication may see things differently.
Feedback Loops That Go Nowhere
If your company asks for feedback but doesn’t follow up, you’re not listening—you’re collecting noise. Employees notice. A Visier survey found that the top thing employees want after completing a survey isn’t more perks or pizza parties. It’s communication: an update on what’s being done in response to their feedback. When that loop isn’t closed, the message comes through loud and clear: “We heard you, but we don’t care enough to act, or to tell you what actions we’re taking.”
The situation becomes especially grim when communications teams aren’t even positioned to help. A study from Weber Shandwick found that only 17 percent of CEOs believe their communication and public affairs teams are well-equipped to handle today’s rapid-fire challenges, whether economic, geopolitical, or cultural. If comms can’t rise to the moment, who will? And if the function doesn’t get the resources or influence to meet the moment, what message does that send to employees about how much they’re valued, or how seriously leadership takes trust, transparency, and connection?
The Shifting CEO Message
Executive priorities have shifted, and the shift occurred without much input from communicators. Axios reports that, as of March, leaders have shifted from an empathetic, reassuring tone during the pandemic to one focused on productivity and accountability. As Axios put it, “CEOs communicating with employees has evolved from ‘bring your best self to work’ to ‘step it up.’”
Other executive trends have included:
Return-to-office mandates — Surveys reveal a stark divide in perception. Full-time mandates among Fortune 500 firms nearly doubled from 13 percent to 24 percent since the fourth quarter of 2024. But 77 percent of employees feel these mandates reflect a lack of trust, while only 39 percent agree that they boost productivity. Because leaders seem to believe that presence equals performance, the friction intensifies. In fact, 35 percent of employees say they’d consider quitting if forced to return to full-time office work. That number grows to 40 percent for Millennials and Gen Z, the demographics that dominate the workforce.
Abandonment of DEI goals — The rollback of DEI initiatives and goals to accommodate the Trump Administration’s executive orders doesn’t sit well with most employees. Seventy-eight percent say it’s very important that their organization remains inclusive. Reports of employees unhappy with companies scaling back their DEI programs are common. Disney employees, for example, have expressed displeasure, and at major firms (such as Accenture, Skadden, and Kirkland & Ellis, among others), employees have resigned in protest, framing the moves as an “authoritarian” infringement on inclusive values.
Layoffs — The shift from a seller’s to a buyer’s market has led to a surge in corporate layoffs. As a result, roughly one-third of American workers report experiencing “layoff anxiety” this year. Sixty-five percent of those who survived recent layoffs say they’re now worried about their own job security, compared to just 24 percent at more stable companies.
Employee confidence has plunged, with Glassdoor reporting that confidence among mid-level employees has dropped to 47 percent, the lowest level since the company began tracking it in 2016.
This has led most employees (69 percent) to prioritize job security over career growth.
But We Publish An Award-Winning Email Newsletter!
These and other widespread symptoms of disengagement, as well as the specific areas of misalignment, all point to a single epicenter: a systemic breakdown in internal communication. This is not merely a contributing factor; it is the core mechanism through which the disconnect is created and sustained. The failure is not a lack of messages, but a lack of connection, clarity, and trust.
Does all this sound like internal communicators should be patting each other on the back for a job well done?
The current state of things should be a clarion call for internal communicators. Corporate journalism remains important. For various reasons, employees need to be informed about what’s happening. As Mitchell Stephens pointed out in A History of News, “Organizations depend for their unity and coherence on a sense of group identity. To think a society’s thoughts is to belong to that society. News provides the requisite set of shared thoughts.”
However, ensuring news and information travel between leaders and employees is just table stakes. If it is all an internal communication team does, we end up in the circumstances we face today.
Many internal communicators will argue that many of these issues are HR’s problem, or that their remit does not extend beyond coordinating messaging. But if internal communicators are not addressing the various ways the company communicates internally—employee-to-employee, department-to-department, and the messages sent by its processes, among others—who will? Who else has the expertise to identify communication bottlenecks and challenges and strategize solutions?
The current state of things is unsustainable. The concept of internal communication needs to expand or risk irrelevance.
FIR #469: Is Internal Communication Failing?
A growing body of research suggests employees are more disconnected than ever. What are internal communication teams getting wrong? Also in this long-form monthly episode for June 2025:
- Buzzstream interviewed over 150 digital PR pros to assess the state of digital PR. It looks a lot like it did five years ago.
- Social media has overtaken television as Americans’ primary source of news.
- Chief Communication Officers are in a precarious position, expected to anticipate and address political and societal upheaval, often sharing information executives don’t want to hear.
- Pope Leo XIV has called for an ethical AI framework in a message to tech execs gathering at the Vatican.
In his Tech Report, Dan York looks at Mastodon’s updated terms prohibiting AI model training, announcements from TwitchCon, and the impact of Texas’s mandatory age verification law on Internet privacy and security.
Links from this episode:
- Study: CCOs Take On Growing Political Risk
- Work Schedules Fail Millions of U.S. Employees
- Breaking Down the Infinite Workday
- Creators Turn to Agentic AI to Manage Fan Engagement
Links from Dan York’s Tech Report:
The next monthly, long-form episode of FIR will drop on Monday, July 28.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly or request them in our Facebook group.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
@nevillehobson (00:02)
Hi everyone and welcome to episode 469 of For Immediate Release, the monthly long-form episode for June 2025. I’m Neville Hobson in the UK.
Shel Holtz (00:13)
I’m Shel Holtz in Concord, California in the U.S. We’re very happy that you have joined us for our monthly review of what’s going on in the communications slash technology space. And there is always a lot going on, always. And I heard about a lot of it recently. I was at the IABC World Conference in Vancouver, small conference, only about 600 people, I think. There are…
Definitely some challenges facing the world of associations in general and IABC in particular. But as usual, the content at the conference was excellent. There were some really good sessions on things like driving AI adoption in the organization presented by ProSci, the change management research organization with some really revealing data, some very interesting stuff. For example, Neville, the
number one driver of adoption of AI in an organization is the very visible and vocal support from the most senior leadership of the organization. That’s the top factor. And in a lot of organizations, those guys don’t have a clue what this is or how it
@nevillehobson (01:18)
big surprise there.
Opportunity for communicators, I would say that signifies, Shel.
Shel Holtz (01:29)
It absolutely is. So we have these topics that we are going to jump into here shortly, but Neville, first, why don’t you remind everybody what we have already posted since our last monthly episode.
@nevillehobson (01:43)
Indeed, some good discussion we had on a handful of topics since the last monthly show. That was 466, published on May the 26th. And we led in that one again with AI features. No surprise to anyone, I suppose, as in every single episode we’ve been doing throughout this year, I think, really.
But we started with the topic on AI. Not only are AI chatbots still hallucinating, we said; by some accounts it’s getting worse. And we had a conversation about LLMs and hallucinating and so forth also in that episode. A handful of other topics too, including one I’ve been reading even more about in the past week or so: Google’s new tool for making AI videos with sound, following the one with text. That’s Veo 3.
These seem to be coming out of the woodwork from a variety of players, Midjourney most recently in the past few days. So expect to hear us talking about it on FIR during the course of July, I think.
Shel Holtz (02:37)
Yeah, I don’t know if you’re aware, I was hearing about this on another podcast, that these Veo videos are being strung together with themes and shared on TikTok and they’re going viral. I can’t remember what the themes are, but they’re kind of silly and fun. But yeah, Veo 3 has really led to this explosion of these videos being shared.
@nevillehobson (02:51)
Yeah, there’s a lot of that. A lot of that.
Yeah.
There’s around a dozen such tools currently, according to, who was it, The Verge, if I recall correctly. And I’ve heard of half of them. So new things are appearing left, right and center. The Midjourney one, just a quick aside: I coughed up some money just so I could try it. Blimey, I tell you, this is extraordinary. You upload a static image and it creates a five-second video from that; you just prompt it a bit,
or not, as you prefer; it’ll do something. And I’ve done about half a dozen of these that I’m going to stitch together into a single video. I saw a couple on LinkedIn of people doing similar things. So for 10 bucks a month, it’s worth it to discover what this can do. So expect to see lots of silly stuff out there. But there’s great learning in seeing what everyone else is doing. So it’s definitely another phase in these tools.
Shel Holtz (03:43)
Yeah.
I have a Midjourney account. I haven’t tried that yet, but you’ve been able to do that on PyCo, which I’ve been paying for for a while. So it’d be interesting to see how it works on Midjourney. Yeah.
@nevillehobson (03:50)
Give it a shot.
Sure, there are a number of tools where you could do that.
This one, I’m seeing the tech press saying wow over this particular one. So it’s offering something, I suppose. Go and give it a shot. So we also talked in this episode, and this is a bit of a roundabout way to get to what we talked about in the last monthly, about a new Global Alliance report on lack of strategic leadership about AI’s ethical use. AI again.
Shel Holtz (04:01)
Yeah, I’ll have to go give it a try.
@nevillehobson (04:18)
and a few other topics, plus Dan York’s tech report about a number of services online shutting down and other new ones starting up. So pretty full episode that came in at 104 minutes. No, wait, one hour 43. What’s that? Yeah, 100 and whatever. Anyway, one hour 43. So nearly an hour and three quarters. Yeah. No, it’s not an hour and three quarters almost. So that’s a hefty but good one, Donna. Thank you for that. So
Shel Holtz (04:33)
Yeah. We’re communicators. Math is not our strong suit.
@nevillehobson (04:45)
But that was that one. Since then, we talked in 467, June the 5th; that was Mary Meeker’s trends report on AI. Mary Meeker, as many of you will know, is a venture capitalist and former Wall Street securities analyst, best known for the annual Internet Trends reports she used to publish a decade ago and going back into the 2000s. Serious credibility. But she released a new one
dedicated entirely to AI: 304 slides, not the most slides she’s had in a deck; one of her internet ones was 600 slides, substantial content. But this is worth a read. We talked about it. She has credibility, as we said in the show, and with credibility as strong as hers, it’s likely that this report will become the defining source of truth about the state of AI. So it’s definitely worth taking a look at the report and
listening to that episode to get our take on what she had to say. And then finally, 468, published June 17th: new threats to reputation. We said that while a company’s reputation doesn’t appear as a line item on a profit and loss statement or a balance sheet, it is nevertheless a critical intangible asset that significantly influences financial performance and long-term success. So in this episode, we looked at some recent research
and reports to zero in on the newest reputation challenges and how communicators should face them. So you’re up to date now with that little wrap up.
Shel Holtz (06:12)
We also had an interview drop.
@nevillehobson (06:15)
We did. Yes, we did. That was a really good conversation we had with Craig Silverman. We’ve interviewed Craig twice before on this show, but you’ve got to go back to 2008 and 2012 to get those interviews, so well over a decade ago. And here’s Craig. We talked to him about Indicator, his new venture that is all to do with fighting digital deception,
and he explains how he does all that. He explained how Indicator came to be, the challenge of launching a media startup, and what kind of impact he hopes to achieve. He also shares practical insights for communicators facing the growing threat of coordinated inauthentic behavior, fake reviews, and AI-generated information. Very timely. That was a good conversation. Almost three quarters of an hour we talked to Craig about that, and there were some really good insights he shared. So very much worth a listen.
Shel Holtz (07:10)
Could have gone on longer. I had questions when we wrapped up. But yeah, Craig is a journalist, a trained journalist, and he spent five years at ProPublica reporting on disinformation and misinformation. So it was a logical step to move into this independent journalism that he’s doing with his partner. So yeah, definitely worth a listen.
@nevillehobson (07:12)
It could have. Me too.
Right. And you might,
if you know of Craig, you might remember back 15 years ago he published a website called Regret the Error, pointing out errors made in media reporting, which led to a book deal. And I’ve got the book. It’s a nice look back in time to see what that was all about. But that was a good conversation we had with Craig, I must admit.
Shel Holtz (07:47)
Me too.
Yeah
Also published since last month is episode 117 of Circle of Fellows, the monthly panel discussion with IABC fellows and a moderator, also a fellow, usually me, sometimes Brad Whitworth, talking about a topic of interest to communicators. This one was different. We did this one live at the IABC World Conference. We had…
three of the five new fellows up on stage. The other two weren’t able to make it. And then we had eight fellows in the front row of the audience. So we had a camera aimed at the stage. I was at the lectern and the three fellows in chairs. And then Brad was out in the audience with a microphone and his wife, Peg Champion, was following him around with a camera.
And all of this was feeding into StreamYard, which we used to do Circle of Fellows. And I was able to do the camera switching seamlessly. And this was all questions from the audience. So it wasn’t on a single topic. We went an hour talking about issues that were on the minds of communicators. It’s a really interesting episode. So that’s available both as a podcast and a YouTube video.
We’re also preparing for episode number 118, returning to the usual format. This one’s on communication leadership. The panelists include one of our brand new fellows, Mike Klein, along with Robin McCaslin, Sue Heumann, and Russell Grossman. This will be at noon Eastern time on Thursday, July 17th. So if you’re interested in hearing the perspectives of some senior communicators on leadership and communication,
Tune into that or catch the video or audio replay later. And with that, it’s time to turn to our reports as soon as we pay these bills.
There was a time when digital was something you bolted on to your PR efforts. Neville, you undoubtedly remember those times: should we do something digital? Should we have a website to go with this? I remember when TV commercials had URLs appearing at the bottom and it was, wow, look at that, they’re showing their URL on a TV commercial. PR now is digital. I mean, calling it digital PR is almost ridiculous.
It’s just at the center of how we communicate. And BuzzStream’s latest state of digital PR survey is out. And if you’re wondering where the industry is headed, this year’s survey pulled in answers from 150 digital PR pros across the globe. I guess that means there are PR pros who are not digital PR pros, which is a little worrisome, but there’s a lot of food for thought here. So let’s start with the basics.
What’s working in digital PR these days? The clear winners are data-driven hero campaigns and good old-fashioned expert commentary. It turns out about 95 percent of the professionals out there lean on these two tactics. You need both the big attention-grabbing home run campaigns and the steady, reliable singles. And Neville, I apologize for the baseball metaphors. I don’t know the equivalents in cricket.
@nevillehobson (11:14)
No, that’s okay still because I probably don’t either, so that’s fine.
Shel Holtz (11:18)
Okay, I should have gone for football so you could have done rugby, right? It’s always nice, though, to see the stats back up what so many of us are already doing and just feel intuitively is the approach that works. Almost half of respondents say digital PR is actually more effective than it was a year ago. More links, more visibility, better results. But, there’s always a but.
72 % also say it’s gotten more challenging at the same time. If that feels like a paradox, it is. Blame it on everything from industry layoffs to Google’s never-ending algorithm updates to the growing army of competitors in the digital space. Basically, the pressure cooker has been turned up to 11. Now, what about budgets? It’s not exactly a free-spending landscape. Most digital PR teams are working with less than $10,000 a month and
Only a handful, about 4%, have more than $20,000 to play with. The cost per link, which is how a lot of these teams still measure value, typically stays under $750. Here’s something interesting. A full quarter of respondents are generating 40 or more links per month. If you’re into link building, that’s a pretty solid haul for your money. And interestingly,
link building is still at the heart of most of these digital PR campaigns. So what does success look like in digital PR? It is still all about the links. Not just any links; quality links are more important than they’ve ever been, with 87 percent of PR pros saying that’s their number one metric. Tools like Ahrefs’ Domain Rating and Moz’s Domain Authority are the go-to yardsticks for measuring those links.
And when it comes to relevance, two thirds of practitioners say they check the page title when the link appears. Little detail, sure, but one that says a lot about the evolution of the craft. Patience remains a virtue. Around half of those surveyed say it takes three to six months to see meaningful results from a digital PR campaign. For some, it’s even longer, think six to eight months before you really start to notice the uptick in authority or referral traffic. If you’re in a hurry,
Digital PR probably isn’t for you. Follow-up emails deserve a quick mention here. A massive 98 % of respondents say they send at least one follow-up, and the data shows that it pays off. Sending a follow-up boosts your reply rate by 85%. So consider that a best practice. The best results come when you follow up within a day. Open and reply rates both peak right after the first message. Now here’s why all this matters.
Digital PR isn’t just about backlinks anymore. It’s about driving organic traffic, raising brand visibility, sparking social buzz, and even helping organizations weather a crisis. Done right, digital PR delivers a kind of surround sound effect for your organization. One campaign, multiple touch points. The big takeaway in 2025 is that digital PR is harder than ever, but also more rewarding.
It’s also about mixing hero campaigns and expert commentary, following up quickly, measuring what matters and, above all, being patient. Because if there’s one thing this year’s survey makes clear, it’s that digital PR is a marathon. It’s not a sprint. The other thing that occurs to me, Neville, and I think where we’re probably going to end up talking, is it’s all still about referral traffic to drive folks to a website. And we know that’s on the decline because of AI. And I was…
really struck that they’re still talking about success in terms of backlinks and not a word about showing up in AI search results. So Neville, what was your take on this study?
@nevillehobson (15:15)
probably mirrors much of what you’ve said, although I have to say I really went down a rabbit hole at the very start, where I’m saying, why are we calling it digital PR, particularly given the definition that I’ve seen all over the place, including from an organization called the Digital Marketing Institute, that
It’s PR, right? And you talk about digital channels, isn’t that a bit of a misnomer now, because everything’s digital. If it’s defined by the channel, that makes less sense to me, even more so. So I think in the report early on, they asked, they have a little section called expert opinions, a little drop down, where one of the questions at the start was, how does digital PR compare to traditional PR?
And the quote I liked, and you’ll understand why in a second, is from Will Hobson, hi Will, US VP of PR at Rise at Seven. He says, the lines are getting more blurred, but in my opinion, digital PR is just PR. Our activity needs to be brand relevant, but also culturally relevant while being closely tied to business objectives. Now, you can apply that to PR, and I agree. So we haven’t moved on from
calling it digital PR, which emerged when all this was kind of new, about 15 or so years ago. And I always had a problem as well with digital marketing, where you slap the word digital in front of a job description or a job title or some kind of activity, and it sounds super cool and new and fresh and amazing. We need to stop doing that, because if you then look at these definitions, the Digital Marketing Institute says,
Digital public relations is a strategy used to increase awareness and visibility of your brand using online channels. That’s the first part of it, to which I would say, but isn’t that what PR does? Let’s call it traditional PR for differentiation. Isn’t that what PR does? Digital PR is similar to traditional PR, they say, but it offers the opportunity to reach more people in a measurable and targeted way. I don’t know what that means, but that doesn’t make sense to me either.
I’m not going to hang up on this, because I’m not, but it just struck me that we’ve got to stop calling it digital PR. I think your point, though, to kind of focus on this major issue, is that exact one about links driving traffic to websites and so forth. I did think the report showed some interesting aspects related to SEO that are very much in the
domain of this is how we’ve always been doing this. This is not new. So that makes sense to me. The syndication and nofollow stuff I found interesting. But I guess the main point is, though, if we’re going to call it digital PR for the purposes of this article, I’m OK with that. When you get into some of the kind of slicing and dicing of what they came up with: which teams do you work with more closely if you’re in digital PR?
And that I didn’t find surprising: the number one, by a huge margin, was SEO, the folks who do SEO, followed by marketing and then PR. So traditional PR is third on your list of people you work closely with. It surprised me a bit to see in this result that strategy was way down the list. And I would have thought that if you’re gonna, you know, surely we’re talking about being strategic
to, well, not to coin a phrase, of course, but I hear that all the time. But I would have thought that would have been higher. And it, you know, I could slice and dice this, but I don’t think that would add to our conversation. I think there are things we can learn from this survey, without doubt. But to me, it was obscured by this thing about digital marketing. And I think things are moving so fast that the kind of feeling I get from some of this
is that this is not on top of these changes that are moving fast. And I’m thinking in particular about what you and I have talked about in a variety of episodes of this podcast over the course of this year, on things like Google AI Overviews and the role of AI in all of this, which is going to interfere with all of these traditional-sounding plans, it seems to me. So the future, according to this, to my mind, doesn’t look very rosy as change is upon us. And this doesn’t look like it’s addressing change.
Shel Holtz (19:18)
Yeah, I don’t see them making any pivots here to get ahead of this. And one of the things that one of the speakers at the IABC conference said, I mean, it’s an old line. He just sort of changed the words. He said, when change is coming at you, the best companies start running. And you don’t have to be faster than the change. You just have to be faster than your competitors.
The old line being when the bear is coming at you, you run, you don’t have to be faster than the bear, just faster than the other campers, right? But as I think about the term digital PR, I guess I can see the distinction in the respect of PR as being a reputation management and relationship building activity.
@nevillehobson (19:47)
Ha ha ha ha.
Shel Holtz (20:06)
I spend a lot of time on the phone with people, which is not digital. There are PR people, chief communication officers, for example, executive communicators who are coaching their leaders to prepare them for delivering testimony before Congress or preparing them to make a pitch to a city council or a zoning board. There’s a lot of PR that goes on that isn’t digital.
I think what we’re talking about with this is outreach, right? And when we’re trying to get our message out, so PR messaging is all digital these days, but there’s a lot of relationship building and reputation building that doesn’t happen online. It happens over the phone, it happens face to face. So I guess we could say that’s the distinction.
@nevillehobson (20:57)
Yeah, but you’ve got to bear in mind one thing. So if you’re on a smartphone, which is digital, then this is digital PR; okay, digital outreach is what you’re doing. No, I mean, seriously, this is one of the numbers here, again, not to belabor this point, because this could be a whole separate discussion all by itself. But the number one tactic in the report that we’re discussing, for the question of which of the following tactics you consider to be part of digital PR?
Shel Holtz (21:03)
Yeah
@nevillehobson (21:21)
The number one, 99.4 percent of people said, was pitching data-led content. So it got me thinking. But that to me is crazy, because whatever you’re doing in public relations, when you slap a word like traditional or digital in front of it, you are invariably going to be pitching data-led content or data-driven content, whatever. You’ve used data, or rather you have data, and you have used tools to extract meaning from that data
that leads your pitch. So these kinds of narrow definitions, to me, are obscuring the value of these activities and dressing them up with a word that is wholly unnecessary. So Will Hobson’s got my vote where he says we should not call it that; we should just call it PR.
Shel Holtz (22:05)
Yeah,
I don’t disagree. I am thinking back to an old, old case study. This was when, I can’t remember who was behind it, but there was a call to boycott the tuna industry, the canned tuna, because of the inadvertent dolphin catch that was happening. They were scooping up dolphins in the nets and dolphins were dying.
@nevillehobson (22:10)
Ha ha ha!
Shel Holtz (22:30)
and they were just throwing them overboard because all they wanted was the tuna. And StarKist objected, and I think it was Burson-Marsteller that they hired. And Burson-Marsteller got the StarKist people together with the people who were behind the boycott. And StarKist said, look, we’re already doing all kinds of things to prevent dolphins from being caught up
in the sweep of tuna. Look at our numbers, look at our tactics, the things that we have implemented as procedures to avoid this. And the group came back and said, okay, yeah. And they went out and said, boycott tuna, except StarKist, they’re already good guys. That was negotiation. That was getting people at the table. So today, communicating the outcome of that would clearly be digital, but the actual effort
@nevillehobson (23:12)
You
Shel Holtz (23:21)
was getting people together at a table to hash things out. That’s still PR.
@nevillehobson (23:27)
So you just defined why we shouldn’t be differentiating it, because that just adds confusion to the activity. It’s all just PR, it’s relationship building. These are methods you use to get your message out or engage with someone or whatever it might be. It doesn’t define the activity itself. Indeed, it talks about the channel. Is it the channel, if you wanted to say it’s that?
Shel Holtz (23:33)
It’s all just PR.
@nevillehobson (23:51)
But it doesn’t help at all, in my opinion. I would argue you could apply that to digital advertising, digital marketing, digital whatever. It is not helpful. So I’m glad we agree on that, Shel. And I thank Will Hobson for prompting this part of our discussion on this podcast. Hope you’re a listener, Will. So let’s see. This is a good digital story, this one, Shel.
Social media overtakes TV as the main source of news in the US.
Shel Holtz (24:16)
Do we need to call
it social media? It’s all just media. Just…
@nevillehobson (24:19)
Well, this is another conversation,
right? I’m as guilty as everyone for calling it social media. Indeed, I often talk about social media marketing. So, is it just marketing? I mean, it’s okay. Oh my God. Yes, absolutely. So this story I’m going to share is actually kind of a subset of a huge report from the Reuters Institute, the latest global report that was published actually just literally a week or so ago.
Shel Holtz (24:30)
Every company is a media company.
@nevillehobson (24:47)
in June. But one of the clearest signs of how radically the news ecosystem is changing comes from that report. And that’s a bit I want to talk about. For the first time, social media has overtaken television as the main source of news in the US. And by the way, there we have to use the word social to differentiate it from just general media, right? According to Reuters, 54 % of Americans now get their news from platforms like TikTok, YouTube and Instagram.
compared to 50 percent who still rely on TV. Now, I’ve been hearing for a long time that, you know, more Americans get the news online than anywhere else. This seems to provide clear evidence of that perspective. And it comes from a highly credible source at the Reuters Institute. I found the reporting by the Guardian, which I’m referencing, was really good at summarizing the whole thing in a way that helps me discuss it with you, rather than all the huge chunks of data that are in the Reuters report.
But this isn’t just a shift in platforms, it’s a shift in power, according to The Guardian. Influencers and podcasters, not journalists, are increasingly shaping what news gets seen and heard. Joe Rogan, the famous American podcaster, alone reached more than a fifth of Americans in the days after Trump’s reelection. I mean, a fifth of Americans? That’s got to be, at the least, what, close to 100 million people, if not more,
especially among younger men, a demographic traditional media often fails to reach. That shift brings both opportunities and deep concerns. Trust and transparency are now front and center, as news increasingly comes from personalities rather than publications. AI chatbots like ChatGPT and Gemini are starting to become news sources themselves, particularly among under 35s, yet users are already questioning their accuracy and reliability. There’s also a darker undercurrent.
Globally, news avoidance is rising fast. In the UK, nearly half the population say they sometimes or often avoid the news altogether. And I tell you, I’m in that group. It’s the highest figure in the study, that UK statistic. Many feel overwhelmed by negativity or simply tune out from what they see as repetitive or irrelevant coverage. In my case, it’s both in this context.
So as the center of gravity shifts from institutions to individuals and from owned newsrooms to algorithm-driven feeds, what does this mean for trust, for civic awareness, and for the role that communicators like us still have to play? What do you reckon, Shel?
Shel Holtz (27:16)
there
is so much to unpack here. Let’s start with the fact that people are avoiding the news. I just heard an interview Kara Swisher interviewed Nicole Wallace on her podcast, On with Kara Swisher. For those who don’t know Nicole Wallace, she was the press secretary for President George W. Bush. She worked in the upper echelon of the John McCain presidential campaign.
@nevillehobson (27:18)
Mmm.
Shel Holtz (27:41)
She grew disillusioned with the Republican Party and has voted with the Democrats in the last couple of elections. And she is the host of Deadline White House, which is a two hour Monday through Friday news program on MSNBC. And she told Kara Swisher that she understands why people are avoiding the news. It’s relentless. You watch an hour block.
of news on CNN, MSNBC, Fox, wherever you prefer to go. And it’s an assault of nonstop distressing stuff. She has, Nicole Wallace, started a new podcast through MSNBC. And it’s not 100 % news. It’s interviews with A-listers just about whatever they want to talk about. She said it always…
finds its way to some news, but it’s not news from beginning to end. And people are hungry for that. And that’s one of the reasons they’re turning off the relentless assault of news and opting for either something that has less of it, is more entertaining and soothing and comforting, or presents the news through a filter that is equally comforting in their bubble.
Interestingly, as you mentioned, a fifth of Americans listen to or watch Joe Rogan. I was reading that he is turning away from Trump lately in his commentary in the episodes where he is political because he’s not always, but that’s going to be an interesting thing to watch to see if he wields the kind of influence that can sink the poll numbers even lower than they are.
But you mentioned using AI tools for the news. I do that, not exclusively, but ChatGPT has the ability to set up tasks. And I have set up tasks to get the latest news on trends in elements of the industry where I work. And every day I check and every now and then I find something really, really interesting and good out of that.
@nevillehobson (29:48)
Yeah.
Shel Holtz (29:51)
It supplements my other monitoring of the media environment. So it’s just one more source and occasionally it reveals something that I wasn’t aware of. But fundamentally what worries me most about this is the selectivity that people may not be aware they’re being subjected to if they…
go to these other sources for news. And frankly, you know, watching MSNBC or CNN or Fox is the same. The only way I find out what’s going on in the rest of the world is to watch the BBC. That’s where I find out what’s going on in the Sudan, for example, or in Colombia, because they don’t cover that on the cable news stations in the US. They’re laser focused on
the four or five stories that are going to gin up the most outrage among the audience right now. So it’s all the current politics and that’s what’s turning people off. And I think if the media wants to maintain an audience, they’ve got to figure out how to bring people back, how to make these more palatable because what’s missing is the gatekeeper. And I understand that people don’t like the idea of the gatekeeper. I can pick for myself what I’m interested in.
But if somebody isn’t saying this is important and you need to know about this, this is what was great about reading a newspaper, the old fashioned newspaper is even if you weren’t that interested in the story, you saw the headline and you knew what was going on. Maybe you read the lead and now you knew what was happening in that part of the world that could have an influence on you and your life at some point in the future. Because when you are curating the news,
by following the TikToker who presents the stuff in a style that entertains you, what aren’t you hearing about that you should be hearing about? And somehow we need to get back to having somebody who can curate what’s important. So at least you have a superficial knowledge of what’s going on beyond what’s in that bubble.
@nevillehobson (31:54)
Yeah, that makes sense. Although I’d argue you could say that particularly the younger generations, who are getting the news from the likes of TikTok, it’s like they don’t care what they don’t know. And they don’t want someone telling them you should know this. That’s a trend without any doubt. In which case, the way you address that, then, is to find a gatekeeper, if you like, a source that would be trustworthy enough for them to pay attention to. And that’s what needs to happen.
Shel Holtz (32:07)
And that’s worrisome.
Well, exactly.
@nevillehobson (32:20)
I mean, there are some other metrics that pop out of the kind of big picture we’ve just discussed that I think, yeah, we need to be really cognizant of, in terms of what the changes are that are happening here. So the rise of news influencers, we touched on that. And we’ve talked about this a lot in recent episodes. There are podcasts, there’s YouTube, there are TikTok creators. I hear the word creator a lot, influencer a lot in this context as well, particularly among the younger demographics.
So Joe Rogan, as I mentioned, according to this report, he reached 22 % of Americans that week, as I mentioned after Trump’s inauguration. But I’ve read also separately, he himself has been critical of some of the people out there who are so-called sharing news and stuff like that. So is this a generational thing that I say to myself? I suspect it is largely. But the challenge for
or for all of us, I suppose, are the shifts in the platforms. So there are some statistics from Reuters: YouTube at 30 percent, Instagram and WhatsApp at around 20 percent, TikTok at 16 percent are major players in news dissemination. X is losing liberal users and gaining right-leaning ones. There’s no surprise there. But again, that has a big impact on this big picture. The challenges for publishers, according to Reuters:
struggling to adapt to video-driven and personality-led content, struggling to adapt to it, not dismissing it or combating it. They’re really struggling with that. Losing commercial value and visibility on platforms they don’t control. Facing a bypass of scrutiny as populist politicians speak directly to people through influencers. Now, that is definitely something that we’re seeing a lot happening over here in Europe, certainly.
News avoidance, we just discussed that, is rising: 40 percent globally, at least sometimes, are avoiding it. That’s up from 29 percent in 2017. So in eight years, 29 percent to 40 percent. That’s a big rise. So the interesting thing I find about the emerging role of AI, to your point: you mentioned that younger users are turning to chatbots like ChatGPT and Gemini for getting the news, not setting up a program that delivers the news to you,
but actually getting the news from those chatbots. I do that occasionally, but I don’t say, I’m done, I’ve got my newsfeed. No, no, no, I’ll do it for something specific where I want the benefit of either Perplexity, which was good at this, though I’m not using that so much anymore, or Gemini, which is most interesting in how it’s doing this, finding stuff. I know enough about my own use of those platforms that, generally speaking, and this is a very general comment,
I trust what ChatGPT tells me, not blindly. Let me tell you that I check most things, not every single thing. But if I’m getting something that I’m going to make use of in some form, I will double check it myself. And I have encountered recently a couple of things where it’s made a mistake. So what do we call that, hallucination or whatever? And I’ve challenged it and it said, you’re absolutely right, thank you for pointing that out, I made a mistake. I get that just like a human being might do. So that’s how I tend to regard it.
But this is something that…
Shel Holtz (35:26)
Well, the data says that these days they’re making
fewer mistakes than humans undertaking the same task would make. They’re not perfect, but they’re better than we are.
@nevillehobson (35:35)
But
well, that’s probably true. So I think that’s a good way to approach it, and many of the critics I see of ChatGPT notably don’t seem to do it this way, which is literally to think of your AI assistant as a person, as a colleague you’re working with, and you’re asking it to do a task as you would ask a colleague or a hired contractor, whatever it is that you’re doing.
And when you challenge it, don’t worry too much about, you know, working for hours on getting a prompt right; talk to it conversationally. I do that all the time. And it works well, I find. But this is a useful report. And the reporting I’ve seen, not just in the Guardian but elsewhere, that zeroes in on particular aspects of this is worth paying attention to. And I think the one thing I would say that
you could argue is not emerging anymore. It’s kind of with us. There’s concerns that persist about the accuracy, trust, and transparency in AI-generated news. And that’s something we need to pay close attention to, not to circumvent it or think, now, no, it’s there. That is part of the landscape. So if the younger users, according to surveys like this one, are turning to this, we’ve got to understand that.
and make changes according to our planning and be part of the changes that are happening and the shifts that we are seeing right in front of our eyes. That’s what we need to do.
Shel Holtz (37:01)
Yeah, so there’s two angles on this. One is the mainstream media, the TV news media needs to figure out a way to bring people back, those who are avoiding the news to make it desirable to want to watch this. I don’t know if it’s changes in formats or what. We as communicators need to understand how to get the news into the heads of the people who we want to hear this.
And that means identifying the influencers, the podcasters, getting stuff on YouTube so that people will find it, making it easier for people to find. And getting into those AI-generated search results. Interestingly, I’ve heard recently that the AI-generated search results, particularly the Gemini overviews or the Google overviews, are heavily dependent on Reddit and Quora.
both of which are other sources that people are going to for news. And these are not places where you can just post your news. You have to go in there and engage. So another opportunity for a strategic shift in the communications department.
@nevillehobson (38:10)
Lots to pay attention to I think.
Shel Holtz (38:12)
Yep. Well, there’s another major shift happening right before our eyes in the role of the chief communication officer, a shift that’s only accelerating as political risk becomes business risk. A new study by United Minds that was reported on Provoke Media shows that CEOs, I’m sorry, CCOs are no longer merely putting out fires, providing executive counsel and developing…
basic PR strategies, they’re expected to anticipate political and cultural turbulence and shape organizational strategy accordingly. The study makes it clear that CCOs are now business drivers, not just messengers. In volatile contexts, think fractured politics, rising cultural tensions, corporate affairs leaders are being brought into the room to offer strategic counsel. They’re expected to flag risk.
convene cross-functional war rooms and guide public positions. As Ben Kalovich from United Minds puts it, with an audience of one in DC that can and will quickly strike, CCOs need to lead their organizations to make the right decisions. That’s a weighty responsibility and one that requires a shift from reactive communications to proactive leadership.
In companies that embrace this new model, the CCO serves as a kind of stabilizing board voice, a steady hand while other leaders overreact to daily political noise. Interestingly, that’s kind of what the Melbourne mandate called for, what, 13 years ago from the Global Alliance. They called for PR to be at the center of maintaining that steady guidance through political turbulence and social turbulence.
Anyway, the organizations set up frameworks, monitoring political signals, introducing decision protocols, and convening diverse teams early. And the result of this is anticipation of contentious issues like DEI or AI regulation and the ability to respond with unity and credibility rather than scrambling under pressure.
Not every organization is embracing this shift, though. In more traditional companies, communications is still seen as downstream messaging. Boards and CEOs may say they want early risk warning, but when the CCO raises a flag, they end up getting marginalized. As Dave Tovar of Grubhub noted, CCOs are caught between expectations, knowing they should warn but lacking authority to influence outcomes. Picture it.
telling the company the winds are shifting but not being allowed to change course. This tension between leadership resistance and expectations creates a double bind. Leaders may resist expanding the CCO’s remit, preferring to keep them in a PR silo, but then when political or reputational risk escalates, they demand answers. The CCO is stuck, expected to prevent or manage a crisis but without the platform or agency to do it.
That gap undermines both credibility and governance and risks turning strategic warning into a career killer if leadership ignores it. We’ve seen indicators that the pressure on CCOs is rising. A recent Axios survey reported a 10.5 percent turnover rate among global CCO roles in 2024. That’s up from 8 percent the year before. Why? Because these roles are expanding and not every executive is ready or empowered
to lead with that level of complexity. A lot of these folks are hired for the moment and then find themselves lacking when volatility demands broader strategic competence. That signals a growing divide between what companies want and what communicators are equipped or intend or invited to deliver. So for CCOs navigating this evolving role, there are a few paths forward. One, step into the advisory space.
Build political risk frameworks and cross-functional coalitions before crises emerge. Second, map your internal networks. Engage peers in legal, government affairs, HR; trust and influence are built pre-crisis. And finally, translate your role. Reframe your value not as PR, but as strategic insight, especially to boards and CEOs. But none of this sticks unless leadership
both expects and empowers; reducing resistance has got to be an area of focus.
@nevillehobson (42:47)
Yeah, that makes sense. It’s a complicated picture you’ve outlined there, Shel, I think. But it makes sense for the Chief Communications Officer, in particular that role, to be truly strategic as a valued advisor, a counselor, more than just the words that we see bandied about about what the role of a communication professional is, that they’re a counselor or advisor to senior leadership.
This goes much, much deeper than that. And I think it’s not new, the depth of this, but in the context of where we’re at today, with all the things that are going on in the world that could, well, not so much could impact us, but that we ought to be paying attention to, because this is the world in which we are doing business and living, political risk is a business risk, and it’s a valuable
attribute for somebody to be able to provide guidance and insight to leadership on, in a way that they are trusted by those leaders to do. So if you want to get a seat at that table that we hear about, as a trusted advisor, this is a route. But it’s complicated; it really is high risk, given what you outlined about the reality of human behaviors and the ways in which we engage with others in a work environment. You’ll
be marginalized, you’ll be sidelined, you will not be supported, you’ll be sabotaged, all that stuff, if you don’t do it right. And that sounds a pretty trite way to say it, because it’s wider than that. But you need to have all your ducks lined up. You’ve got to have support. You need to have that network to support you. And you need to show your value in supporting others. I think this is a diplomat’s role as well. I don’t know anyone, just going through my mind; I know a number of
people with the CCO title in large corporations as well, but not anyone I could think of who I could say, yeah, this person will be a role model for this kind of role. doesn’t mean to there aren’t any, I just don’t know any at the moment. But I think this is a natural evolutionary step for a CCO in a large enterprise in particular, particularly in a, let’s say controversial to some industry, pharmaceuticals comes to mind, actually armaments comes to mind, although that’s probably a…
a hot one to be in that right now in that you don’t need to try and persuade customers to buy your products. But the way in which you are able to navigate the political risk is key. you know, I couldn’t offer more than what I just said, Shell. I think it is a it is a fascinating topic to be discussing, given the context of where we’re at in the world.
Shel Holtz (45:08)
Yeah, the report didn’t list industries that are struggling with this more or less than others, but I suspect one of the toughest places to be a CCO right now is in big tech because you have CEOs, many of them, I’m not going to say all of them, but many of them now see themselves as entitled to rule the world. And are they going to listen to a CCO who says this particular cultural issue
is going to affect us negatively if we don’t get on the right side of it or if we don’t communicate it effectively with key stakeholder audiences, they’re going to do what they want to do. And I imagine that’s a tough place to want to be strategic in terms of what this report is talking about.
@nevillehobson (45:53)
I agree, which adds even greater urgency to one element of the CCO's activities, which is building strong alliances with senior people in the organization. So it's not just him or her alone going to the CEO saying, this is what we need to do. He or she has got the backing of many people that also have influence with that CEO. It's easy to discuss this, and I realize that it probably isn't easy actually, in real life, to put this into practice.
But that’s what you’re probably going to have to do, I would say.
Do you want to say thanks to Dan? I see, I just noticed he's uploaded it now to…
Shel Holtz (46:29)
Yeah. Yeah. Yeah,
I’ve already downloaded it, but clearly I haven’t listened to it.
You’re up next. Do you want to say? no, I’ll do it. then then we’ll.
@nevillehobson (46:44)
No, can, you think, Dan, yeah.
Shel Holtz (46:53)
Hang on, just need a time code.
Thanks for that report, Dan. And also, congratulations on your job change. I don't know, Neville, if you heard about this, but Dan is now the chief of staff to the head of the Internet Society. So just a big shout-out. That's a tremendous move, and congratulations on that.
@nevillehobson (47:10)
Indeed I did. I saw Dan posting about it.
So let’s talk about something unusual and increasingly important that’s happening at the intersection of faith, ethics, and technology. At the second annual Rome conference on AI held last week at the Vatican, attended by executives from Google, OpenAI, Meta, and more, Pope Leo XIV made a bold call. AI must be developed within an ethical framework that upholds human dignity, not just innovation for its own sake.
He’s positioning AI ethics as a signature issue of his papacy, something that’s been widely reported in some of the mainstream media, notably the Wall Street Journal just a few days ago. He’s doing this in the same way Pope Leo XIII, so one number less than what Pope Leo XIV is and some hundred years in between, once defended factory workers during the Industrial Revolution.
But this time it’s not about wages or working hours, it’s about what it means to be human in an age of intelligent machines. Crucially, he’s not rejecting technology, he’s confronting its unregulated ambition, warning against the illusion that access to data equals wisdom, and calling attention to the risks to children’s development, justice, and even spiritual well-being. What stands out is how Pope Leo is reframing AI, not as a technical or economic issue, but as a spiritual and societal one.
He’s using the church’s global moral influence to challenge the Silicon Valley narrative, especially the idea that salvation might one day come from code rather than grace. And this brings us as communicators into the frame. As I explored last week in a post on my blog, referencing a deeply analytical report by the Wall Street Journal, we have a strategic role to play here, not just translating complex technologies, but interpreting what they mean for people and society.
We’re often the ones asking the hard questions about trust power and impact inside organizations. So when the Vatican calls for ethical restraint in the face of AI’s rise, it’s not just a headline, it’s a reminder that we too need to help shape the values that drive technological progress. The church is offering one model of how to do that through moral clarity, digital diplomacy, and deep reflection on human dignity. This is not just a church versus tech story.
It’s a lesson in how moral authority, strategic dialogue, and long-term vision can influence how the world adopts powerful technologies. Communicators can draw on this in multiple ways, elevate ethical concerns internally, lead with principles, not just performance, and frame AI not as a product, but as a public conversation about who we are becoming. The Vatican is showing that digital diplomacy doesn’t require dominance. It requires clarity, conviction, and credible values.
@nevillehobson (50:09)
That’s a strategy worth studying, I think.
Shel Holtz (50:12)
It is, and this is going to be an interesting dynamic as Pope Leo makes this the centerpiece of his papacy, at least in the early days, because it is at odds with at least the US government, the current administration's position on AI, which is all gas, no brake. They think that we need to accelerate development and adopt the Zuckerberg philosophy of
@nevillehobson (50:19)
Yeah.
Shel Holtz (50:38)
go fast and break things. And it’s troubling. And it’s good to have that voice out there, but then you have JD Vance, the vice president of the United States, who is a Catholic. I believe he’s a convert to Catholicism. And he’s out there saying, go, go. It’s build, build, build. Get this stuff way out ahead of what every other country is able to do.
@nevillehobson (50:52)
Yeah, I read that.
Shel Holtz (51:03)
The Pope interestingly has some allies and it would be interesting to see if the church does ally itself with some other institutions that are promoting the same message. One of these is the Global Alliance for Public Relations and Communication Management. This is the umbrella organization. In fact, we’ve mentioned them, I think a couple of times so far in this episode. They’re the organization
that represents the world’s public relations and communications associations. They represent close to 400,000 communicators worldwide. IEBC is a member, the Chartered Institute of Public Relations is a member, most of the world’s associations are members. And they have just released the Venice Pledge, so-called because it was hammered out.
in Venice. In fact, they say it’s result of a collaborative AI symposium workshop session held in Venice, Italy, hosted by the Global Alliances European Regional Council in partnership with Therpy, the Italian Federation of Public Relations as part of the Global Alliances Technology Trends and Communication Transformation Month in May. This was signed by the board, passed by the board in July.
Let me read this just so everybody can understand what they’re saying here and see how it aligns with what Pope Leo is saying. The Global Alliance defines responsible AI as the ethical, transparent, and human-centered development and application of artificial intelligence strategically deployed to support, not replace, human judgment, creativity, and communication. It emphasizes accountability, fairness, and accuracy while minimizing
bias, misinformation, and harm. Responsible AI upholds privacy and data protection, reflects professional and organizational values, and ensures proper attribution, governance, and human oversight to maintain trust, integrity, and societal well-being. The seven responsible AI guiding principles are: ethics first; human-led governance; personal and organizational responsibility; awareness, openness, and transparency; education and professional development; an active global voice; and human-centered AI for the common good. And they are asking communicators to sign the pledge. We will have a link to this in the show notes. And if this is something that you agree with, by all means, give it a click and sign the pledge. It's fairly benign. I don't see anything particularly controversial there. It is, though, I think, entirely aligned with what Pope Leo is saying and very much at odds with the US government's approach to AI, along with the approach being taken by most of the big players in the industry.
@nevillehobson (53:52)
So I just want to go back to Leo, actually, because this is not an agenda we've got yet; pledges and so forth, I believe, are way too soon for that kind of thing. But I get what the Global Alliance is doing. What this story is about, really, is the change that is happening, the way in which the Catholic Church is engaging with people who hitherto have been highly critical of what they are saying. So big tech in particular, and the continuance of, let's call it, digital diplomacy that started around 2020, five years ago, under Pope Francis. That has led to meetings with the leaders of all the big tech companies, the big six, I suppose you could argue, if not some others too, I'm sure.
And this meeting recently that I mentioned is another step forward in that journey, where they're looking to, I guess, illustrate the value of principled dialogue. Although I think it also highlights the limits of voluntary codes and the need for firm, accountable governance. And that's the bit that I think is going to be the critical one. Can Leo, as the head of the Catholic Church, move the needle on that? We have a lot of talk around the world about regulation, for want of another word, and various things happening, but that hasn't really moved any needles yet. But I think Pope Leo is going to be a far more tech-savvy, regulation-minded voice than his predecessor, who was not as informed. Both pontiffs shared a deep concern, and I've read a lot about that, that innovation without ethics risks eroding the very dignity it promises to enhance, and that's their starting point. So communicators, as in you and me and all the others listening to this, can help bridge that gap, to increase the understanding of that. But the fact is that the Vatican is taking this on. It's certainly not making news headlines everywhere, but increasingly I'm seeing reporting on these steps that the Vatican is taking. So as I mentioned, over the past decade, actually, they've held private meetings with tech leaders: Mark Zuckerberg, Tim Cook, Brad Smith, Eric Schmidt, and others more recently. Many took place under the umbrella of the Minerva Dialogues. That's another grouping of meetings that took place privately, convened by a number of very influential voices, senior church leaders in the Vatican, and they're continuing. And they moved, reportedly, from enthusiasm about connectivity to deeper concerns about AI. So you're seeing a convergence, a meeting of minds, on certain aspects of this. So concerns about AI, misinformation, polarization, and this phrase I plucked from a Vatican report, the nature of truth. I thought, they should get Donald Trump in there, then, if you want to talk about the nature of truth. Vance would do, I suspect. But the point to me, though, is that I think this is a massive shift in possibilities over this broad topic that everyone in the whole world seems to be struggling with. And yes, we can have pledges. Like you, I think from what you said, I've not read it myself, there doesn't seem to be anything in there that would cause a conflict for anyone. You're pledging that you would follow these things and, in fact, I can't imagine anyone you're going to take seriously is going to say no to that, to say, no, I'm not going to follow these things. Of course you are. But that's not, well, he's not a person in our circle of conversational focus even. But I think what we're talking about here is a sea change that is only just emerging into the public space. And it's
Shel Holtz (57:17)
Elon Musk wouldn’t sign on to that pledge.
@nevillehobson (57:34)
early days yet. I mean, Pope Leo has only been in the role for what, two months, less than that even. But look at the moves the Catholic Church is making. And the reason why I think it's so significant is they're engaging with Silicon Valley on the one hand, and they are now promoting quite strongly the ethical frameworks that they have had discussions with various people on. So another one, the Rome Call for AI Ethics, from this body in the Vatican called the Pontifical Academy for Life. That's a pledge that Microsoft, IBM and Cisco have signed, and that was launched in 2020. It laid out the principles of transparency, inclusion and responsibility. But one thing I found interesting, Shel: Google and OpenAI have not signed it, not yet. So that highlights the unresolved tension between tech autonomy and ethical oversight. So there's a hurdle to get over at some point. But shaping global discourse, this is something, I remember this, Pope Francis spoke at the 2024 G7 summit warning of a technological dictatorship and calling for a legally binding treaty on AI governance. The 2025 G7 meeting in Canada has just happened, no news about that, but I think Pope Leo undoubtedly is going to carry that mission forward. But here's the thing: a firmer, more technically informed posture. He's going to talk like he knows the topic he's talking about. So these to me are converging into something quite interesting. Symbolic and narrative power is another one, and this pope is very savvy on all of this. So for instance, referring back, you remember this, I'm sure you will: the AI-generated image of Pope Francis in a white puffer coat that went viral in 2023. It exposed the public's vulnerability to deepfakes and the church's symbolic visibility in digital culture. But rather than dismiss it as a joke, Pope Francis used the moment to amplify concerns about truth, trust, and the limits of data, which is an example of value-led narrative shaping. So all these elements are happening. So I think the Pledge is great. And I think it'd be good for other professional bodies to either support this as a single initiative or come out with their own. Where is the harm in doing this? It's not affecting anything that's going on here. I think it's, let's not forget, this is aimed at a wider societal grouping as oppos
06/22/25 | FIR #469: Is Internal Communication Failing?
FIR #466: Still Hallucinating After All These Years
Not only are AI chatbots still hallucinating; by some accounts, it’s getting worse. Moreover, despite abundant coverage of the tendency of LLMs to make stuff up, people are still not fact-checking, leading to some embarrassing consequences. Even the legal team from Anthropic (the company behind the Claude frontier LLM) got caught.
Also in this episode:
- Google has a new tool just for making AI videos with sound: what could possibly go wrong?
- Lack of strategic leadership and failure to communicate about AI’s ethical use are two findings from a new Global Alliance report
- People still matter. Some overly exuberant CEOs are walking back their AI-first proclamations
- Google AI Overviews lead to a dramatic reduction in click-throughs
- Google is teaching American adults how to be adults. Should they be finding your content?
In his tech report, Dan York looks at some services shutting down and others starting up.
Links from this episode:
- Veo 3 News Anchor Clips
- Google has a new tool just for making AI videos
- Chicago Sun-Times publishes made-up books and fake experts in AI debacle
- How an AI-generated summer reading list got published in major newspapers
- Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation
- Chicago Sun-Times Faces Backlash After Promoting Fake Books In AI-Generated Summer Reading List
- Groundbreaking Report on AI in PR and Communication Management
- Comms failing to provide leadership for AI
- Perplexity Response to Query about Failure to Implement AI Strategically
- Google is Teaching American Adults How to Be Adults
- Google AI Overviews leads to dramatic reduction in clickthroughs for Mail Online
- Shocking 56% CTR drop: AI Overviews gut MailOnline’s search traffic
- Google AI Overviews decrease CTRs by 34.5%, per new study
- Company Regrets Replacing All Those Pesky Human Workers With AI, Just Wants Its Humans Back
- How Investors Feel About Corporate Actions and Causes
Links from Dan York’s Tech Report
- Skype shuts down for good on Monday: NPR
- Glitch is basically shutting down
- Investing in what moves the internet forward
- Bluesky: “We’re testing a new feature! Starting this week, select accounts can add a livestream link to sites like YouTube or Twitch, and their Bluesky profile will show they’re live now.”
- Bridgy Fed
- Fedi Forum
- Take It Down Act 2025 (USA)
- Mike Macgirvin
The next monthly, long-form episode of FIR will drop on Monday, June 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or send us an email.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz (00:01)
Hi everybody and welcome to episode number 466 of For Immediate Release. I'm Shel Holtz in Concord, California.
@nevillehobson (00:10)
and I’m Neville Hobson in the UK.
Shel Holtz (00:13)
And this is our monthly long form episode for May 2025. We have six reports to share with you. Five of them are directly related to the topic du jour of generative artificial intelligence. And we will get to those shortly. But first, Neville, why don't you tell us what we talked about in our short form midweek episodes since
You know, my memory’s failing and I don’t remember.
@nevillehobson (00:44)
Yeah, some interesting topics. We've had a handful of short form episodes, 20 minutes more or less, since the last monthly, which we published on the 28th of April. And I'll start with that one because that takes us forward. That was an interesting one with a number of topics. The headline topic was cheaters never prosper, we said, unless you pay for what you create. And that was related to a university student who was expelled for developing an AI-driven tool to help applicants to software coding jobs cheat on the tests employers require them to take. And it had mixed views all around, with some people thinking, hey, this is cool, and it's not a big deal if people cheat, and others who found it an abhorrent idea. I'm in that camp. I think it's a dreadful idea that most people think it's not a bad thing. It is. Cheating is not good. That's my view. There were a lot of other topics in that one as well, a handful of others that were really, really good: how communicators can use seven categories of AI agents, and a few others worth a listen. That was 90 minutes, that one. That's kind of hitting the target goal we had for the long form content. If it's too long, hit the pause button and come back to it. Might apply to this episode too. So that was 462 at the end of April. That was followed on May the 7th by 463, which talked about delivering value with generative AI's endless right answers. This was a really quite intriguing one, quoting Google's first chief decision scientist, who said that one of the biggest challenges of the gen AI age is leaders defining value for their organization. And one of the considerations, she says, is a mindset shift in which there are endless right answers. So you create something that's right, you repeat the prompt, for images, for example, and get a different one, and it's also right. And so she posed a question: which one is right? It's an interesting conundrum-type thing. But that was a good one. That was 16 minutes, that one. And
Shel Holtz (03:01)
We had a comment on that
one, too, from Dominique B., who said, sounds like it's time for a truthiness meter.
@nevillehobson (03:02)
We have a comment? Yeah, we do.
Okay, what’s what are those?
Shel Holtz (03:13)
Stephen Colbert fans here in the US would understand truthiness. It’s a cultural reference.
@nevillehobson (03:18)
Okay.
Got it. Good. Noted. Then 464. This was truly interesting to me because it's basically saying that, as we've talked about and others constantly talk about, you should disclose when you're using AI in some way that illustrates your honesty and transparency. Unfortunately, research shows that the opposite is true: that if you disclose that you've used AI to create an output, you're likely to find that your audiences will lose trust in you as soon as they see that you've disclosed this. That's counterintuitive. You'd think disclosing and being transparent on this is good. It doesn't play out according to the research. It's an interesting one. I think I'd err on the side of disclosure more than anything else. Maybe it depends on how you disclose. But it turns out that people trust AI more than they trust the humans using AI. And we spent 17 and a half minutes on that one, Shel. That was a good one. We got a comment too, I think, did we not?
Shel Holtz (04:31)
from Gail Gardner who says, that isn’t surprising given how inaccurate what AI generates is. If a brand discloses that they’re using AI to write content, they need to elaborate on what steps they take to ensure the editor fact checks and improves it, which I think is a good point.
@nevillehobson (04:48)
wouldn’t disagree with that. Then 465 on May the 21st, the Trust News video podcast PR trifecta. That’s one of your headlines, Cheryl. I didn’t write that one. So ⁓ it talks about unrelated trends or seemingly unrelated trends, painting a clear picture for PR pros accustomed to achieving their goals through press release distribution and media pitching. The trends are that people trust each other less than ever.
people define what news is based on its impact on them becoming their own gatekeepers. And video podcasts have become so popular that media outlets are including them in their up-fronts. So we looked at finding a common thread in our discussion among these trends and setting out how the communicators can adjust their efforts to make sure the news is received and believed. That was a lengthier one than usual. 26 minutes that came in at has always this great stuff to…
to consume. So that brings us in fact to now this episode 466 monthly. So we’re kicking off the wrap up of May and heading into a new month in about a week or so.
Shel Holtz (05:59)
We also had an FIR interview drop this month.
@nevillehobson (06:03)
We did. Thank you for the gentle nudge on mentioning that. That was our good friend Eric Schwartzman, who wrote an intriguing post, or article, I should say, in Fast Company about bot farms and how they're invading social media to hijack popular sentiment. Lengthy piece, got a lot of reaction on LinkedIn, likes and so forth in the thousands, some hundreds of comments. So we were lucky to get him for a chat. It's a precursor to a book he's writing based on that article, which looks at bot farms, which now outnumber real users on social networks, according to Eric's research, and at how profits drive PR ethics: why Meta, TikTok, X, and even LinkedIn are complicit in enabling synthetic engagement at scale, says Eric. So lots to unpack in that. That was a 42-minute conversation with Eric. His new book, called Invasion of the Bot Farms, he's currently preparing for that. He'll explore the escalating threat, he says, through insider stories and case studies. That was a good conversation with Eric, Shel. It's an intriguing topic, and he really has done a lot of research on this.
Shel Holtz (07:16)
And we do have a comment on that interview from Alex Brownstein, who's an executive vice president at a bioethics and emerging sciences organization, who says, ChatGPT and certain other mainstream AIs are purportedly designed to seek out and prioritize credible, authoritative information to inform their answers, which may provide some degree of counterbalance.
And also since the last monthly episode, there has been an episode of Circle of Fellows. This is the monthly panel discussion featuring, usually, four IABC fellows. That's the International Association of Business Communicators. I moderate most of these, and I moderated this one. It was about making the transition from being a communication professional to being a college or university professor teaching communication. And we had four panelists who have all made this move. Most of them have made it full-time and permanent; they are teachers and not working in communications anymore. One is still doing both. They were John Clemens, Cindy Schmig, Mark Schuman and Jennifer Waugh. It was a great episode. It's up on the FIR Podcast Network now. The next Circle of Fellows is gonna be an interesting one. It is going to be done live. This is the very first time this will happen, episode 117. So we've done 116 of these as live streams, and this one will be live streamed too, but it'll be live streamed from Vancouver, site of the 2025 IABC World Conference, and Circle of Fellows is going to be one of the sessions. So we're gonna have a table up on the platform with the five members of the 2025 class of IABC fellows and me moderating. And in the audience, all the other fellows who are at the conference will be out there among those who are attending the session, and we'll have the conversation. Brad Whitworth will have a microphone. He'll be wandering through the audience to take questions. It'll be fun. It'll be interesting. It will be live streamed as our Circle of Fellows episode for June. So watch the FIR Podcast Network or LinkedIn for announcements about when to watch that episode. Should be fun.
@nevillehobson (09:54)
Okay, that does sound interesting, Shel. What date is it taking place? Do you know?
Shel Holtz (10:00)
It’s going to be Tuesday, June 10th at 1030 a.m. Pacific time. It’s the last session before lunch. So even though IABC has only given us 45 minutes for what’s usually an hour long discussion, we’re going to take our hour. People can, you know, if they’re really hungry, their blood sugar is dropping, they can leave. But we’ll be there for the full hour for this circle of fellows.
@nevillehobson (10:27)
I was just thinking, the last time I was in Vancouver was in 2006, and that was for the IABC conference that year. That's nearly 20 years ago. I mean, where's time gone, for goodness sake?
Shel Holtz (10:37)
I don’t know. I’ve been looking for it. So as I mentioned, we have six great reports for you and we will be back with those right after this.
@nevillehobson (10:40)
No, that was good.
At Google I/O last week, that's Google's developer conference, amongst many other things the company unveiled a product called Veo 3, that's V-E-O, Veo 3, its most advanced AI video generation model yet. It's already sparking equal parts wonder and concern. Veo 3 isn't just about photorealistic visuals. It marks the end of what TechRadar calls the silent era of AI video, by combining realistic visuals with synchronized audio. Dialogue, soundtracks and ambient noise, all generated from a simple text prompt. In short, it makes videos that feel real, with few, if any, of the telltale glitches we've come to associate with synthetic media. ZDNet and others, included in a collection of links on Techmeme, describe Veo 3 as a breakthrough in marrying video with audio, simulating physics, lip-syncing with uncanny accuracy, and opening creative doors for filmmakers and content creators alike. But that's only one side of the story. The realism Veo 3 achieves also raises alarms. Axios reports that many viewers can't tell Veo 3 clips from those made by human actors. In fact, synthetic content is becoming so indistinguishable that the line between real and fake is beginning to dissolve. That alarm is a point I made in a post on Bluesky earlier last week, when I shared a series of amazing videos created by Alejandra Caraballo at the Harvard Law Cyberlaw Clinic, portraying TV news readers reading out a breaking news story she created just from a simple text prompt. What comes immediately to mind, I said, is the disinformation uses of such a tool. What on earth will you be able to trust now? One of Alejandra's comments in the long thread was, this is going to be used to manipulate people on a massive scale. Others in that thread noted how easily such clips can be repeated and recontextualized, with no visual watermark to distinguish them from real broadcast footage. I mean, one thing is for sure, Shel, if you've watched any of these, they're now peppered all over LinkedIn and Bluesky and most social networks. You truly are going to have your jaw dropping when you see some of these things. It's not easy to visualize just hearing an audio description, but they truly are quite extraordinary. This is a whole new level. There's also the question of cost and access. Veo 3 is priced at a premium, around $1,800 per hour for professional-grade use, suggesting a divide between those who can afford powerful generative tools and those who can't. So we're not just talking about a creative leap. We're staring at an ethical and societal challenge too. Is Veo 3 one of the most consequential technologies Google has released in years, not just for creators, but for good and bad actors and society at large? How do you see it, Shel?
Shel Holtz (14:00)
First of all, it’s phenomenal technology. I’ve seen several of the videos that have been shared. saw one where the prompt asked it to create a TV commercial for a ridiculous ⁓ breakfast cereal product. was ⁓ Otter Crunch or something like that. And it had a kid eating Otter Crunch at the table and the mom holding the box and saying Otter Crunch is great or whatever it was that she said.
⁓ and you couldn’t tell that this wasn’t shot in a, in a studio. ⁓ it was, it was that good. Alarm? I’m surprised that there is alarm because we have known for years that this was coming. ⁓ and I, I don’t think it should be a surprise that it has arrived at this point, given the quality of the video services that we have seen from other providers and
This is a game of leapfrog so that you know that one of the other video providers is going to take what Google has done and take it to the next level, maybe allowing you to make longer videos or there will be some bells and whistles that they’ll be able to add and the prices will drop. This is a preliminary price. It’s a brand new thing. We see this with open AI all the time where the first
time they release something, have to be in that $200 a month tier of customer in order to use it. But then within a couple of months, it’s available at the $20 a month level or at the free level. So this is going to become widely available from multiple services. I think we need to look at the benefits this provides as well as the risk.
that it provides. This is going to make it easy for people who don’t have big budgets to do the kind of video that gets the kind of attention that leads to sales or whatever it is your communication objective was for enhancing videos that you are producing with actual footage in order to create openers or bridges or
just to extend the scene, it’s going to be terrific. Even at $1,800 an hour, there are a lot of people who can’t get high quality video for $1,800 an hour. So this is going to be a boon to a lot of creators. In terms of the risk, again, I think it’s education, it’s knowing what to look for.
getting the word out to people about the kinds of scams that people are running with this so that they’re on their guard. It’s going to be the same scams that we’ve seen with less superior technology. It’s going to be, you know, the grandmother con, right? Where you get the call and it sounds like it’s your grandson’s voice. I’ve been kidnapped. They’re demanding this much money. Please send it. Sure sounds like him. So grandma sends the money. So
This is the kind of education that has to get out there ⁓ because it’s just gonna get more realistic and easier to con people with the cons that frankly have been working well enough to keep them going up until now.
@nevillehobson (17:38)
Yeah, I think there is real cause for major alarm at a tool like this. You just set out many of the reasons why, but I think the risk comes less from examples like the grandmother call, you know, someone calling the grandmother saying, I've been kidnapped. I don't know anyone that's ever happened to; I'm not saying it doesn't happen, but that doesn't seem to me to be like a major daily thing. It might be more prosaic, more fundamental than that. But some of the examples you can see, and the good one to mention is the one from Alejandra Caraballo, the videos she created, which were a collection of clips with the same prompt. They were all TV anchors, presenters on television, talking about breaking news that J.K. Rowling had drowned because a yacht sank after it was attacked by orcas in the Mediterranean off the coast of Turkey. What jumped out at me when I saw the first one was, my God, this was so real. It looked like it was a TV studio, all created from that simple prompt. But then came three more versions, all with differently accented English, American English, US English, English as a second language for one of the presenters, which illustrates what you could do from that one prompt. And she said that the first video took literally a couple of seconds, and within less than 10 minutes, after tweaking a couple of things over a number of attempts, she had a collection of five videos. So imagine that. There are benefits, unquestionably, and indeed some of the links we've got really go through in significant detail the benefits of this to creators. But right on the back of that comes this big alarm bell ringing: this is what the downside looks like. And I think your point, it's going to come down, competitors will emerge, undoubtedly, I totally agree with you. But that isn't yet. In the meantime, this thing's got serious first-mover advantage, and the talk-up I'm seeing is across the tech landscape mostly; it hasn't yet hit mainstream conversation. I'm not sure how you explain it in a way that excites people unless you see the videos. But this is big alarm bell territory, in my opinion, and I think it'll accelerate a number of things, one of which is more calls to regulate and control this, if you can. And, you know, who knows what Trump's going to do about this? Probably embrace it, I would imagine. I mean, you've seen what he's doing already with the video and stuff that promotes him in his emperor's clothes and all this stuff. So this is a major milestone, I think, in the development of these technologies. It will be interesting to see who else comes out in a way that challenges Google. But if you read Google's very technically focused description, this is not a casual development by six guys with a couple of computers. This has required, I would imagine, serious money and significant computing power to get it to this stage in a way that enables anyone with a reasonably powered computer to use it and create something. We've also got that aspect to consider: should we be doing something like this that generates, or rather uses, huge amounts of electricity and energy, with all the carbon emissions? We've got that side of the debate beginning to come out a little bit. So it's experimental time, without doubt. And there are some terrific learnings we can get from this. I mean, I'd love to give it a go myself, but not at 1,800 bucks. If I had someone I could do it for, that I could charge for it, I'd be happy. But I'm observing what others are doing and hearing what people are saying. And it's picking up pace. Every time I look online, there's something new about this. Someone else has done something and they're sharing it. So, great examples to see. So yes, let's take a look at what the benefits are and let's see what enterprises will make of this and what we can learn from it. But I'm keeping a close eye on what others are saying about the risks, because, well, you talk about the education, all that stuff, but no one seems to have paid any attention to any of that over the years. So why are they going to pay attention to this now if we try and educate them?
Shel Holtz (22:06)
Well,
that really depends on how you go about this. Who's delivering the message? I mean, where I work, we communicate cybersecurity risk all the time. And we make the point that this isn't only a risk to our company. This is a risk to you and your family. You need to take these messages home and share them with your kids. And every time something new comes out, where there's a new scam, where we are aware
@nevillehobson (22:10)
It does.
show.
Shel Holtz (22:34)
And we usually hear about this through our IT security folks, but where we are aware that in our industry somebody was scammed effectively with something that was new, we get that out to everybody. We use multiple channels and we get feedback from people who are grateful to us for telling them this. So it's not that people won't listen. You just have to reach them in a way that resonates with them.
And you have to use multiple channels and you have to be repetitive with this stuff. You have to kind of drill it into their heads. I see organizations spending money on PSAs on TV alerting people to these scams. They're all imposter scams, is what it comes down to. It's pretending to be something that they aren't. You know, what troubles me about this, I think, is that we are talking a lot about erosion of trust. We talked about it on the last midweek episode, the fact that people trust each other less than they ever have. Only 34% of people say they trust other people, that other people are trustworthy. And we're trying to rebuild trust at the same time we're telling people, you can't trust what you see. You can't trust your own eyes anymore. So this is a challenging time.
@nevillehobson (23:54)
Right.
Shel Holtz (24:00)
without any question, when you have to deal with both of these things at the same time. We need to build trust at the same time we're telling people you can't trust anything.
@nevillehobson (24:02)
It is.
Well, that is the challenge. You're absolutely right, because people don't actually need organizations to tell them that. They can see it with their own eyes, but it's then reinforced by what they're hearing from governments. We've got an issue that I think is very germane to bring into this conversation, something in this country that is truly extraordinary. One of the biggest retailers here, Marks & Spencer, was the subject of a huge cyber attack a month ago, and it's still not solved. Their websites, you still can't do any buying online. You can't do click and collect, none of those things. Today, they announced you can now, again, log on to the website and browse. You can't buy anything. You can't pay electronically. You can only do it in the stores. And no one seems to know precisely what exactly it is. There's so much speculation, so much talk, of which most is uninformed, which is fueling the worry and alarm about this. And the consequences for Marks & Spencer are potentially severe from a reputational point of view, and brand trust, all those things. They haven't solved this yet. People are saying it was likely caused by an insecure password login by someone who is a supplier to Marks & Spencer. But this is not like the little store down the road. This is a massive enterprise that has global operations. And the estimate at the moment is that the cost to them is likely to be around 300 million pounds. It's serious money. They're losing a million pounds a day. It's serious. Oh, they won't disclose it. It's illegal to do that here in the UK, to pay the ransom, if you disclose it. Government advice from the cybersecurity folks is don't pay the ransom. The difficult thing to me is that you follow that advice and they're still not solving the problem.
Shel Holtz (25:45)
And what was the ransom?
@nevillehobson (26:03)
The point I’m making, is that this is just another example of ⁓ forged trust, if I could say it that way, that it was likely until information arrives telling exactly what it was, that someone persuaded someone to do something who they thought was someone else that they weren’t that enabled that person to get access. Right. So this is going to be like that for some of the examples we’ve seen. But I think it’s likely as well to be ⁓
Shel Holtz (26:23)
Yeah, sure. It was phishing.
@nevillehobson (26:33)
the kind of normal that you would find almost impossible to even imagine was a fake. So what's going to happen when, like the J.K. Rowling example, someone in a prominent position in society or whatever is suddenly on a website somewhere that gets picked up and repeated everywhere, before anyone asks, well, wait a minute, what's the source of this? But it's too late by then. And that's likely what we're going to see.
Shel Holtz (26:58)
We
reported on a story like this many years ago. It was, if I remember correctly, a bank robbery in Texas. It was a story that got picked up by multiple news outlets. It was completely fake. The first outlet that picked it up just assumed that it was accurate because of their source and all the other newspapers.
picked it up because they assumed that the first newspaper that picked it up had checked their facts, but it was a false story. This is nothing new. It’s just with this level of realistic video, it’s going to be that much easier to convince people that this is real and either share it or act on it.
@nevillehobson (27:40)
as it will.
And it won’t be waiting on the media to pick up and report on it. That’s too slow. It’ll be TikTokers, it’ll be YouTube. It’s anyone with a website that has some kind of audience that’s connected and it’ll be amplified big time like that. So it’ll be out of control within probably within seconds of the first video appearing. That’s not to say that, dear, know, this is so what do we do? We’ve got to be that that’s that is the landscape now. And honestly and truly can’t imagine how
example of like a JK Rowling death at sea and all that stuff is on on multiple TV screens, supposedly TV studios that you don’t think when you’re watching hang on, is this legit this TV show you might occur to you, but the other nine people out there watching along with you aren’t gonna ask themselves that they’re gonna share it. And it’s suddenly it’s out there. And before you know it. I don’t know.
If it’s ⁓ say the CEO of big company that’s happened at a time of some kind of merger or takeover going on and then that person suddenly dropped dead, that’s the kind of thing I’m thinking about. So ⁓ I can see the real need to have some kind of, I can’t even call it shell regulation, I’m not sure, I don’t know, by government or someone.
alongside, you can’t just leave this to individual companies like yours who are doing a good job. Well, there are 50 others out there who aren’t doing this at all. So you can’t you can’t let it sit like that. Because this, the scale of this is breathtaking, frankly, what’s going to happen. And I think Alejandro Caravaggio and others I’ve seen saying the same thing, that ⁓ that, ⁓ you know, this is going to be a tool used to manipulate people on a massive scale. We’re not talking about business.
employees necessary, the public at large, this is going to manipulate people. And we’re already seeing that at small scale, based on the tech we have now. This tech’s up notches, in my view. you know, 1800 bucks, people are going to do this, ⁓ that to them, it’s like, you know, petty cash almost, or someone’s going to come out with something, again, that isn’t going to be that and it’s on a dark web somewhere and you know.
So I mean, I’m now getting into areas that I have no idea what I’m going to be talking about. So I will stop that now. I don’t know how that’s going to work. this requires attention, in my opinion, that to protect people and organizations from the bad actors, that euphemistic phrase, who are intent on causing disruption and chaos. And this is potentially what this will achieve alongside all that good stuff.
Shel Holtz (30:19)
It’ll be interesting to hear what Google plans to do to prevent people from using it for those purposes. I have access to…
@nevillehobson (30:26)
They have a bit of an FAQ, which talks a little bit about that. But hey, this is still like a draft, I would say.
Shel Holtz (30:33)
I have access to Veo 2 on my $20-a-month Gemini account, so I'll just wait the six weeks until Veo 3 is available there.
@nevillehobson (30:44)
Well, things may have moved on to who knows what in six weeks, I would say. But nevertheless, this is an intriguing development technologically and what it lets people do in a good sense is the exciting part. The worrying part is what the bad guys are going to be doing.
Shel Holtz (31:03)
to say. So I need to make a time code note.
@nevillehobson (31:04)
Yeah.
Shel Holtz (31:18)
The fact that generative AI chatbots hallucinate isn’t a revelation, at least it shouldn’t be at this point, and yet AI hallucinations are causing real, consequential damage to organizations and individuals alike, including a lot of people who should know better. And contrary to logic and common sense, it’s actually getting worse.
Just this past week, we've seen two high-profile cases that illustrate the problem. First, the Chicago Sun-Times published what they called a summer reading list for 2025 that recommended 15 books. Ten of them didn't exist. They were entirely fabricated by AI, complete with compelling descriptions of Isabel Allende's non-existent climate fiction novel Tidewater Dreams and Andy Weir's imaginary thriller The Last Algorithm.
The newspaper’s response? Well, they blamed a freelancer from King Features, which is a company that syndicates content to newspapers across the country. It’s owned by Hearst. That freelancer used AI to generate the list without fact checking it. And the Sun-Times published it believing King Features content was accurate. And other publications shared it because the Chicago Sun-Times had done it.
Then there’s even more embarrassing case of Anthropic. That’s the company behind the Claude AI chatbot, one of the really big international large language models, frontier models. Their own lawyers had to apologize to a federal judge after Claude hallucinated a legal citation and a court filing. The AI generated a fake title and fake authors for what should have been a real academic paper. Their manual citation checks
missed it entirely. Think about that for a moment. A company that makes AI couldn’t catch its own tools’ mistakes, even with human review. Now, here’s what’s particularly concerning for those of us in communications. This isn’t getting better with newer AI models. According to research from Vektara, even the most accurate AI models still hallucinate at least 0.7 % of the time.
with some models producing false information in nearly one of every three responses. MIT research from January found that when AI models hallucinate, they actually use more confident language than when they’re producing accurate information. They’re 34 % more likely to use phrases like definitely, certainly, and without doubt when they’re completely wrong. So what does this mean for PR and communications professionals? Three critical things. First.
We need to fundamentally rethink our relationship with AI tools. The Chicago Sun-Times incident happened just two months after the paper laid off 20 % of its staff. Organizations under financial pressure are increasingly turning to AI to fill gaps, but without proper oversight, they’re creating massive reputation risks. When your summer reading list becomes a national embarrassment because you trusted AI without verification, you got a crisis communication problem on your hands.
Shel Holtz (34:28)
Second, the trust issue goes deeper than individual mistakes. As we mentioned in a recent midweek episode, research shows that audiences lose trust as soon as they see AI disclosure labels, but finding out you used AI without disclosing it is even worse for trust. This creates what researchers call the transparency dilemma. Damned if you disclose, damned if you don’t. For communicators who rely on credibility and trust, this is a fundamental challenge we haven’t come to terms with.
Third, we’re seeing AI hallucinations spread into high-states environments where the consequences are severe. Beyond the legal filing errors, we’ve seen multiple times now, from Anthropic to the Israeli prosecutors who cited non-existent laws, we’re seeing healthcare AI that hallucinates medical information 2.3 % of the time, and legal AI tools that produce incorrect information in at least some percentage of cases that could affect real legal outcomes.
The bottom line for communication professionals is that AI can be a powerful tool, but it is not a replacement for human judgment and verification. I know we say this over and over and over again, and yet look at the number of companies that use it that way. The industry has invested $12.8 billion specifically to solve hallucination problems in the last three years, yet we’re still seeing high profile failures from major organizations who should know better.
My recommendation, if you’re using AI in your communications work, and let’s be honest, most of us are, insist on rigorous verification processes. Don’t just spot check. Verify every factual claim, every citation, every piece of information that could damage your organization’s credibility if it’s wrong. And remember, the more confident AI sounds, the more suspicious you should be.
The Chicago Sun-Times called their incident a learning moment for all of journalism. I’d argue it’s a learning moment for all of us in communications. We can’t afford to let AI hallucinations become someone else’s crisis communications case study.
@nevillehobson (36:37)
Until the next one, right. I mean, listen to what you say. You're absolutely right. Yet the humans are the problem. Arguably, and I've heard this, they're not; it's that the technology is not up to scratch. Fine. Right. In that case, you know that, so therefore you've got to pay very close attention and do all the things that you outlined before, which people are not doing. So this one is extraordinary.
Shel Holtz (36:39)
And it becomes a case study.
The humans are the solution.
@nevillehobson (37:05)
Snopes has a good analysis of it, talking about this. King Features, I mean, their communication about it: they said the company has a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content. And they said it will be ending its relationship with the guy who did this. Okay, throw him under the bus, basically. So you don't have guidance in place properly, even though you say you have a strict policy; that's not the same thing, is it? So I think this was inevitable, and we're going to see it again, sure we will, and the consequences will be dire. I was reading a story this morning here in the UK of a lawyer who was an intern, that's not her title, but she was a junior person, and she entered into evidence some research she'd done without checking, and it was all fake, done by the AI. And as the case turns out, and again, this is precisely the concern, it's not the tech. It's not her fault. She didn't have proper supervision. She was pressured by people who didn't help, because she didn't know enough, and so she didn't know how to do something. And she was under tight parameters to complete this thing. So she did it. No one checked her work at all. So she apologized and all that stuff, and yes, the judge, from what I read, isn't penalizing her. It's her boss he should be penalizing. You're going to see that repeated. I'm sure it already exists in cases up and down businesses and organizations everywhere, where that is not an unusual setup: lack of structure, lack of support, lack of training, lack of encouragement. Indeed, the whole thing is to bring it out, let's get the policy and guidance set up and not just publish it on the internet. We bring it to people's attention. We embrace them. We encourage them. We bring them on board to conversations constantly, brown bag lunches, all the informal ways of doing this too. And I'm certain that happens a lot. But this example and others we could bring up and mention show that it's not happening in those particular organizations. So the time will come, I don't believe it's happened yet, where the most monumentally catastrophic clanger will be dropped sooner or later in an organization, whether it's a government, whether it's a private company, whether it's a medical system or whatever, that could have life-or-death consequences for people. I don't believe that's happened yet, that we know of anyway, but the time is coming where it's going to, I'd say.
Shel Holtz (39:36)
it will,
it undoubtedly will. And you’ll see medical decisions get made based on a hallucination that somebody didn’t check. What strikes me though is that we talk about AI as an adjunct, right? It is an enhancement to what humans do. It allows you to offload a lot of the drudgery so that you can focus your time on more.
human-centric and more strategic endeavors, which is great, but you still have to make sure that the drudge work is done right. I mean, that work is being done for a reason. It may be drudgery to produce it, but it must have some value or the organization wouldn’t want it anymore. So it’s important to check those. And in organizations that are cutting head count,
@nevillehobson (40:06)
Ahem.
Shel Holtz (40:29)
You know, what a lot of employees are doing is using AI in order to be able to get all their work done. That drudge work, having the AI do that and spend 15 minutes on it instead of three hours. It’s not like those three hours are available to them to fact check. They’ve got other things that they need to do. Organizations that are cutting staff need to be cognizant of the fact that they may be cutting the ability to fact check the output of the AI.
which could do something egregious enough to cost them a whole lot more than they saved by cutting that staff. And by the way, I saw research very recently, I almost added it as a report in today’s episode that found that investors are not thrilled with all the layoffs that they’re seeing in favor of AI. They think it’s a bad idea. So if you’re looking for a way to…
get your leaders to temper their inclinations to trim their staff, you may want to point to the fact that they may lose investors over decisions like that. But we need the people to fact-check these things. And by the way, I have found an interesting way to fact-check, and it is not an exclusive approach to this.
But let me give you just this quick example. On our intranet every week, I share a construction term of the week that not every employee may know. And I have the description of that term written by one of the large language models. I don’t know what these things mean. I’m not a construction engineer.
So I get it written, and then the first thing I do is I copy it, and then I go to another one of the large language models and paste it in, and I say, review this for accuracy and give me a list of what you would change to make it more accurate. And most of the time it says, this is a really accurate write-up that you’ve got of this term. I would recommend to enhance the accuracy that you add these things.
So I’ll say, ahead and do that, write it up and make those things. Then I’ll go to a third large language model and ask the same question. I’ll still go do a Google search and find something that describes all of this to make sure I’ve got it right. But I find playing the large language models against each other as accuracy checks works pretty well.
@nevillehobson (42:56)
Yeah, I do a similar thing, though not for everything. I mean, who’s got the time to do all that all the time? It depends, I think, on what you’re doing. But it is something that we need to pay attention to. And in fact, this is quite a good segue to our next piece, our next story, where artificial intelligence plays a big role. This one talks about
a new report from the Global Alliance for Public Relations and Communication Management that offers a timely and global perspective on how our profession is adapting, and in many cases struggling, to keep pace as artificial intelligence continues its rapid integration into our daily work. As AI tools become embedded in the workflows of communication professionals around the world, a new survey from the Global Alliance offers a revealing snapshot
of where our profession currently stands and where it may be falling short. The report, titled Reimagining Tomorrow: AI in PR and Communication Management, draws on insights from nearly 500 PR and communication professionals. The findings paint a picture of a profession that’s enthusiastically embracing AI tools, particularly for content creation, but falling short when it comes to strategic leadership, ethical governance, and stakeholder communication. While adoption is high,
with 91 percent of respondents saying they’re using AI, the report highlights a striking absence of strategic leadership. Only 8.2 percent of PR and communication teams are leading in AI governance or strategy, according to the report. Yet professionals rank governance and ethics as their top AI priorities, at 33 percent and 27 percent respectively. Despite this, PR teams are mostly engaged in tactical tasks,
such as content creation and tool support. This gap between strategic intent and practical involvement is critical. If PR professionals don’t position themselves as stewards of responsible AI use, other functions like IT or legal will define the narrative. This has implications not only for reputation management, but for the organizational relevance of the comms function. Now, in a post on his blog last week, our friend Stuart Bruce
describes the findings as alarming, arguing that communicators are failing to lead on the very issues that matter most: ethics, transparency, stakeholder trust, and reputation. His critique is clear. If PR doesn’t step up to define the responsible use of AI, we risk becoming sidelined in decisions that affect not just our teams, but the wider organization and society. The Global Alliance’s report also shows that while AI is mostly being used for content creation,
very few are leveraging its potential for audience insights, crisis response, or strategic decision making. Many PR pros, Stuart says, still don’t fully understand what AI can actually do, either tactically or strategically. Worse, some are operating under common myths, such as avoiding any use of AI with private data regardless of whether they’re using secure enterprise tools or not. So where does this leave us? Well, it looks to me like somewhere between a promise and a missed opportunity.
How would you say it, Shel?
Shel Holtz (46:21)
It is a missed opportunity so far, as far as I am concerned. And I have seen research that basically breaks through the communications boundary into the larger world of business that says, yes, there’s great stuff going on in organizations in terms of the adoption of AI, but there is not really strategic leadership happening in most organizations. Employees are using it.
There are a growing number of policies, although most organizations still don’t have policies. Most organizations still don’t have ethics guidelines, although a growing number do. There are companies like mine that have AI committees, but the leadership needs to come from the very top down. And that’s what this research found isn’t happening. I was just scrolling through my bookmarks trying to find it; I’ll definitely turn that up before the
show notes get published. If it’s not happening at the leadership levels of organizations, it’s not happening at the leadership levels of communication either. I certainly can see that in the real world as I talk to people. It’s being used at a very tactical level, but nobody is really looking at the whole overall operation of communication in the organization, the role that it plays and how it goes about doing that,
through that lens of AI: how we need to adapt and change, and how we need to prepare ourselves to continue to adapt and change as things like Veo 3 are released on the market and suddenly you’re facing a potential new reputational threat.
@nevillehobson (48:07)
Lots to unpack there. It’s worth reading the report. It’s well worth the time.
Shel Holtz (48:12)
Hey, Dan, thank you for that great report. Yeah, I had to wipe a tear away as well over the passing of Skype. You’re right, it was amazing as the only tool that allowed you to do what it could do. And as we have mentioned here more than once in the past, it is the only reason that we were able to start this podcast in the first place. You were in Amsterdam at the time,
and for you and me to be able to talk together and record both sides of our conversation, Skype was the reason that we could do that. The only other option would have been what at the time was an expensive long distance phone call with really terrible audio. Who knew about the double ender back in those days? We could have done it, you realize: we could have both recorded our own ends. It would have taken forever to send those files.
@nevillehobson (49:02)
Yeah.
Shel Holtz (49:09)
back then, because the speeds were…
@nevillehobson (49:11)
It would have been quicker
burning them to a CD and sending it by courier, I would say.
Shel Holtz (49:15)
Yeah,
no kidding. So bless Skype for enabling not just us, but pretty much any podcasters who were doing interviews or co-host arrangements. Skype made it possible, but Skype also enabled a lot of global business. There were a lot of meetings that didn’t have to happen in person. I mean, you look at Zoom today, Zoom is standing on the shoulders of Skype.
@nevillehobson (49:39)
Yeah, it actually did enable a lot. You’re absolutely right. I can remember, and you remember this, of course, back in those days both of us, I think, were independent consultants. So, you know, pitching for business, securing contracts and following up and all that was key. We had what Skype called Skype Out numbers, regular phone numbers that people could use like a landline and that got forwarded through to Skype. And my
wife’s family in Costa Rica: she used Skype to make calls all the time, and that replaced sending faxes, which is how they used to communicate, because that was cheaper than international phone calls at that time. So lots happened in that time. But in reality, it’s only 20 years ago. It sounds like a lot, but all this has happened in a 20-year period. And Skype was the catalyst for much of this. They laid the foundation for
Teams that we see now, Zoom, Google Meet, all those services that we can use. So what happened to WebEx and the like? It seems to have largely vanished, from what I can see. So we’re used to all this stuff now. But it was a great starter for us. And Dan mentions…
Shel Holtz (50:55)
Yeah, I had a Skype Out number too. My Skype
Out number was my business number, and I got a 415 area code because that’s San Francisco, and nobody outside the Bay Area knew the 510 area code in the East Bay. So it provided just that little extra bit of cachet: oh, a San Francisco number. I mean, there was just so much good that came out of Skype. They kept coming up with great features and great tools even after Microsoft bought it.
@nevillehobson (51:17)
Yeah.
They did.
Yeah. And the pricing structure was good. At that time I had business on the East Coast in the US, and I had a New York number. So yeah, it was super. But so good to have a reminisce there with Dan; that was great. I was intrigued by your item about Bridgy Fed, which I’ve been trying to use since it emerged
Shel Holtz (51:25)
So.
That’s great.
@nevillehobson (51:53)
with Bluesky, but also with Ghost, which has enabled a lot of this connectivity with other servers in the Fediverse. And so I’ve kind of got it all set up. But no matter what I do, it just does not connect, and I haven’t figured out why not yet. So you’ve prompted me to get this sorted out, because it’s important. I’ve got my social web address, and it was enabled by Ghost; that works on Mastodon,
and it enables Bluesky to connect with Mastodon, too. It’s really quite cool, but Bridgy Fed is key to much of that functionality. So maybe it’s just me; I haven’t figured it out yet. It could be. So this is definitely not in the mainstream readiness arena quite yet, but this is the direction of travel without any doubt. And I think it’s great that we eliminate these, you know, ActivityPub versus AT Protocol debates.
It just works. No one gives a damn about whether you’re on a different protocol or not. That’s what we’re aiming for, and that’s what we’re actually moving towards quite quickly. Not for me, though, until I get this working.
Shel Holtz (53:04)
One protocol will win over another at one point or another. It always does.
@nevillehobson (53:07)
It’s like, yeah,
Betamax and VHS, you know, look at that.
Shel Holtz (53:12)
Yep.
And that’s the power of marketing because Betamax was the higher quality format. Well, let’s explore a fascinating and entirely predictable phenomenon that’s emerging in the corporate world. Companies that enthusiastically laid off workers to replace them with AI are now quietly hiring humans back.
@nevillehobson (53:16)
Yes, right, right.
Shel Holtz (53:35)
This item ticks a lot of boxes, man: organizational communication, brand trust, crisis management. Let’s start with the poster child for this phenomenon, Klarna, the buy now, pay later company. CEO Sebastian Siemiatkowski became something of an AI evangelist, loudly declaring that his company had essentially stopped hiring a year ago, shrinking from 4,500 to 3,500 employees through what he called natural attrition.
He bragged that AI could already do all the jobs that humans do, and even created an AI deepfake of himself to report quarterly earnings, supposedly proving that even CEOs can be replaced. How’d that work out for him? Just last week, Siemiatkowski announced that Klarna is now hiring human customer service agents again. Why? Because, as he put it, from a brand perspective, a company perspective, I just think it’s so critical
that you are clear to your customer that there will always be a human if you want. The very CEO who said AI could replace everyone is now admitting that human connection is essential for brand trust. It isn’t an isolated case. We’re seeing this pattern repeat across industries, and it should serve as a wake-up call for communications professionals about the risk of overly aggressive AI adoption without considering the human element. Take Duolingo, which had been facing an absolute
firestorm on social media after CEO Luis von Ahn announced that the company was going AI first. The backlash was so severe that Duolingo deleted all of its TikTok and Instagram posts, wiping out years of carefully crafted content from accounts with millions of followers. The company’s own social media team then posted a cryptic video, all of them wearing those Anonymous-style masks, saying Duolingo was never funny.
We were. And what a stunning example of how your employees can become your biggest communication crisis when AI policies directly threaten their livelihoods. All this is particularly troubling from a communication perspective. These companies didn’t just lose employees; they lost institutional knowledge, creativity, and human insight that made their brands distinctive in the first place. A former Duolingo contractor told one journalist that the AI-generated content is very boring,
while Duolingo was always known for being fun and quirky. When you replace the humans who created your brand voice with AI, you risk losing the very thing that made your brand memorable. But here’s the broader pattern we need to understand. According to new research, just one in four AI investments actually delivers the ROI it promises. Meanwhile, companies are spending an average of $14,200 per employee per year just to catch and correct AI mistakes.
Knowledge workers are spending over four hours a week verifying AI output. These aren’t the efficiency gains that were promised. Now, I firmly believe those are still coming, those gains, and in a lot of cases, they’re actually here now. Some organizations are realizing them as we speak, but we’re not out of the woods yet. From a crisis communication standpoint, the AI layoff rehire cycle creates multiple reputation risks.
There’s the immediate backlash when you announce AI replacements. We saw this with Klarna and Duolingo and others. Employees and customers both react negatively to the idea that human workers are disposable. Then there’s the credibility hit when you quietly reverse course and start hiring people again. It signals that your AI strategy wasn’t as well thought out as you claimed. And that sort of trickles over into how much people trust your judgment and other things that you’re making decisions about.
For those of us working in communication, this trend highlights some critical lessons. First, stakeholder communication about AI needs to be honest about limitations, not just potential and benefits. Companies that overpromise on AI capability set themselves up for embarrassing reversals. Klarna’s CEO went from saying AI could do all human jobs to admitting that customer service quality suffered without human oversight.
Second, employee communications around AI adoption require extreme care. When you announce AI first policies, you’re essentially telling your workforce they’re expendable. The Duolingo social media team’s rebellion shows what happens when you lose internal buy-in. Your employees become your critics, not your champions. And brand voice and customer experience are fundamentally human elements that can’t be easily automated.
The companies struggling most are those that tried to replace creative and customer-facing roles with AI. Meanwhile, companies succeeding with AI are using it to augment human capabilities, not replace them entirely. The irony here is pretty rich: at a time when trust in institutions is at historic lows, companies are discovering that human connection and authenticity matter more than ever. You can’t automate your way to trust.
So what should communication professionals take away from this AI layoff-rehire cycle? Be deeply skeptical of any AI strategy that eliminates human oversight in customer-facing roles. Push back on claims that AI can fully replace creative or strategic communications work. And remember that when AI initiatives go wrong, it becomes a communications problem that requires very human skills to solve.
The companies getting all this right are the ones that view it as a tool to enhance human capabilities, not replace them. The ones getting it wrong are learning an expensive lesson about the irreplaceable value of human judgment, creativity, and connection.
@nevillehobson (59:32)
Yeah, it got me thinking about the human bit that doesn’t get this, which is typically a leader in an organization, though actually not necessarily at the highest level. I’m thinking in particular of companies, and I’ve had a need to go through this process recently, that replace people at the end of a phone line in customer support
with a chatbot, typically as the first line of defense. And I use that phrase deliberately: it defends them from having to talk to a customer. They have a chatbot that guides you through carefully controlled, scripted scenarios. It does have a little bit of leeway in its intelligence to respond on the fly to a question that’s not in the script, as it were, but only marginally. And so you still have to go through a system
that is poor at best and downright dangerous at worst in terms of trust with customers. To your point, I agree totally: it fosters a climate of mistrust entirely when you can’t get to a human and all you get is a chatbot, even sometimes a chatbot that can actually engage in conversation; there are some good ones around.
But my experience recently with an insurance company, relating to a car accident I had in December, a guy drove into my car, it’s been repaired and I’m chasing the other party to reclaim my excess, well, boy, that’s an education in how not to implement something that engages with people. And I don’t see any sign of that changing anytime soon.
So one thing I take from this, Shel, everything you said, and indeed what we discussed in this whole episode so far in this context: it’s a people issue, not a tech issue completely, in terms of how these tools are deployed in organizations. The CEO at Klarna, and I was reading about the CEO of Zoom, who deployed an avatar to open his speech at an event recently;
I just wonder, what were they thinking to do all these things?
05/24/25 | FIR #466: Still Hallucinating After All These Years
FIR #462: Cheaters Never Prosper (Unless They’re Paid $5 Million for Their Tool)
A Columbia University student was expelled for developing an AI-driven tool to help applicants to software coding jobs cheat on the tests employers require them to take. You can call such a tool deplorable or agree with the student that it’s a legit resource. It’s hard to argue with the $5 million in seed funding the student and his partner have raised. Also in this long-form monthly episode for April 2025:
- How communicators can use each of the seven categories of AI agents that are on their way.
- LinkedIn and Bluesky have updated their verification programs in ways that will matter to communicators.
- Onboarding new talent is an everyday business activity that is in serious need of improvement.
- A new report finds significant gaps between generations in the PR industry when it comes to the major factors impacting communication.
- Anthropic—the company behind the Claude LLMs—warns that fully AI employees are only a year away.
- In his Tech Report, Dan York explains how Bluesky experienced an outage even though they’re supposed to operate under a distributed model.
Links from this episode
- LinkedIn post on rumored OpenAI-Shopify integration
- I got kicked out of Columbia for building Interview Coder, AI to cheat on coding interviews
- How To Onboard Digital Marketing Talent According To Agency Leaders
- Exclusive: Anthropic warns fully AI employees are a year away
- AI: Anthropic’s CEO Says All Code Will Be AI-Generated in a Year
- Hacker News on Anthropic Announcement
Links from Dan York’s Tech Report
The next monthly, long-form episode of FIR will drop on Monday, May 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email us.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville Hobson: Greetings everyone, and welcome to For Immediate Release episode 462, our monthly long-form edition for April 2025. I’m Neville Hobson in the UK.
Shel Holtz: I’m Shel Holtz in Concord, California, in the US. We’re thrilled to be back to tackle six topics that we think communicators and others in business will find interesting and useful.
Before we jump into those topics, though, as usual in our monthly episode, we’d like to recap the shorter episodes that we’ve recorded since the last monthly. Neville, over to you; I think we’re due.
Neville Hobson (2): Yeah, I think we are, Shel. Episode 456, that was our March monthly, recorded on the 24th of... or rather, published on the 24th of March.
A lot of topics in that one; they addressed a variety of issues. For instance, the publishing platform Ghost enabling the social web, [00:01:00] employees quitting over poor communication in companies, a UK newspaper launching AI-curated news. And there were three or four other topics in there too, plus Dan York’s tech report as usual.
So that’s a mighty episode. And…
Shel Holtz: We did, on the topic of whether artificial intelligence will put the expertise practiced by communicators at risk. Julie MayT wrote: It’s not about what we do anymore, but how we think, connect and interpret. Human value isn’t disappearing, it’s shifting, isn’t it? The real opportunity is in doubling down on creativity, context and emotional intelligence, by communicating with kindness and empathy.
Looking forward to tuning in. And Paul Harper responded to that comment saying: My concern is that AI, for many applications, completely misses emotional intelligence; cold words, which are taken from the web, which does not discriminate between good and bad sources, truth or fake. And Julie responded to that saying: Good point, Paul.
When it comes to important [00:02:00] stuff where it really matters whether AI is giving us something real or fake, I usually ask for the source and double check it myself. ChatGPT also has a deep research function that can help dig a bit further.
Neville Hobson (2): Okay, so our next one, 457, was published on the 28th of March.
And this I found a really interesting discussion, a very timely one, talking about communicating the impacts of Mr. Trump’s tariffs. We talked about that at some length. Our concluding statement in that episode was that communicators should counsel leaders on how to address the impacts of those tariffs.
And I believe we have a comment on that show
Shel Holtz: from Rick Murray, saying: So true. Business models for creative industries are being turned upside down; revenue and margin streams that once fueled agencies of all types don’t need to exist now and won’t exist in three years.
Neville Hobson (2): Well said, Rick. Well said. Then 458, which we recorded, or published, on the 3rd of April.
This was, I thought, a [00:03:00] really interesting one, and we’re gonna reference it again in this episode. This was about preparing managers to manage human-AI hybrid teams. A lot of talk about that, and how, ready or not, it’s on the horizon. It’s coming; we will have this in workplaces, and we talked about that at some length in that episode,
looking at what it means for managers and how far businesses are from enabling their managers to succeed in the new work reality. We also added a kind of mirror or parallel element to this: it’s also about helping employees understand what this means to them in the workplace if they’ve got AI colleagues.
So I don’t think we had any comments on that one, Shel, but it’s got a lot of views, so people thought about it; they just didn’t have any comments at this point. But a great topic, I think.
Shel Holtz: Left them speechless, if we did.
Neville Hobson (2): Yeah, exactly. So maybe we’ll get some after this episode. Then 459, which we published on the 9th of April, [00:04:00] looked at how AI is transforming content from passive to interactive.
We discussed the evolving landscape of podcast consumption, particularly in light of Satya Nadella, the CEO of Microsoft, and his innovative approach to engaging with audio content through AI. So, not listening to the podcast: he has his favorite chatbot, not ChatGPT of course, it’s Copilot, talk to the transcript, and he engages that way.
Interesting. I’ve seen comments elsewhere about this that say, why on earth do you wanna do this when you can listen? Well, everyone’s got different desires and wishes in this kind of thing, but it seems to me a feasible thing to do for the reasons he describes. And I believe it attracted a number of comments.
Did it not, Shel?
Shel Holtz: We did, starting with Jeff Deonna, who wrote: To be honest, I find this approach deeply disrespectful to podcast hosts and their guests. It literally silences their human voices in favor of a fake conversation with a soulless [00:05:00] algorithm. Now, I responded to that. I thought that CliffsNotes would be a reasonable analogy.
People, rather than reading Silas Marner, read the CliffsNotes, where some soulless summarizer outlines the story and tells you who the key characters are so that you can pass a test, and it silences the voice of the author. And yet we didn’t hear that kind of objection to CliffsNotes. We’ve heard other objections,
of course; you should read the whole damn book, right? But I think people have been summarizing for years. Executives give reports to their admins and say, write me a one-page summary of this. And now we’re just using AI to do the same thing. I don’t know if you had any additional thoughts on Jeff’s comment.
Sure.
Neville Hobson (2): I did. I left a reply to his comment, saying, well, not in these words, but effectively it was a polite way of saying: I disagree; sorry, you’re not right on this, for the reasons you’ve outlined. I don’t have the comment open on my [00:06:00] screen now, so I can’t remember the exact words I used, but I thought I couldn’t let him get away with that without a response.
Shel Holtz: Well, we had another comment from Kevin Anselmo, who used to do the Higher Education podcast on the FIR Podcast Network. He said: I asked ChatGPT to summarize your podcast transcript. After receiving the below, ChatGPT provided practical advice on actioning the takeaways in my own projects. Interesting exercise. I will not read everything he pasted in from ChatGPT’s analysis of the transcript of our podcast,
but I’ll tell you what the five key takeaway labels are: transcripts are becoming essential; AI makes podcasts interactive; most people still prefer passive listening; AI is going multimodal; and then there’s a notable quote from the podcast. So that was turnabout. I mean, we’re talking about what would happen if people didn’t listen to the authentic voices.
Well, you know, Kevin didn’t have to listen to us. I’m fine with that. If he [00:07:00] walks away with actionable items based on hearing or reading a summary of our transcript, that’s one more way to get to it. I agree. And Mark Hillary wrote: Why would you need a transcript for ChatGPT though? Just feed it the audio and it could work out what is being said.
Anyway, I…
Neville Hobson (2): Yeah, I replied to him as well. We had quite an interchange; I can’t remember if it was on LinkedIn or on Bluesky, I can’t remember which service now. But he was gonna go and experiment himself with something else, ‘cause what he described, and someone else left a comment about this as well,
actually, I think that was on Bluesky too, talked about, you know, why would you wanna do this? A bit like Jeff, actually. No, not like Jeff: it wasn’t alleging disrespect, it was just asking, why would you wanna do this? And it was actually Mark who said he’d uploaded an MP3, and it had done the job. Actually, it hadn’t: ChatGPT got the MP3, created the transcript from it, and then it did what it [00:08:00] needed to do. So the transcript is essential to…
Shel Holtz: Whether you created it or it did. Nevertheless,
Neville Hobson (2): Yeah, these great comments are fab to have; it extends the conversation.
Okay. So then 460, which we published on April the 14th. This one talked about layoffs, in the United States primarily, and the return of toxic workplaces and the quote-unquote big boss era. The tide is turning, we started off by assessing. And I mentioned we’re seeing, not the same thing, not layoffs per se, but people quitting here in the UK for different reasons,
and this turmoil and toxicity in the workplace is part of the reasoning. So we explored the reasons behind the layoffs in the US, the impact of CEO tough talk, and how communicators can help maintain a strong, non-toxic workplace. So that was good. We have comments too, don’t we?
Shel Holtz: We do.[00:09:00]
Starting with Natasha Gonzalez, who says: Something that stood out for me was a point that Neville made about employees in the UK who are resigning from jobs due to toxic workplace culture, rather than being laid off as in the US. I imagine this isn’t unique to the UK. And then Julie MayT, who was the first comment, she’s going to bookend our comments, wrote that organizations in the US are starting to see wee cracks in psychological safety, and trust disappearing.
Then all those folks who keep everything ticking along will start to quietly disengage. It’s up to us comms people to be brave enough and skilled enough to say, hang on a wee minute, that message isn’t landing the way you think it is. While the bigwigs are busy shouting, spinning, and flexing, it’s us who need to rock up with the calm, clear, human communications, no drama-rama, just stuff that makes sense and actually helps folks to figure out what the hell is [00:10:00] going on and what to do next.
Neville Hobson (2): Good comment, that. And that takes us to the last one before this episode, episode 461, which we published on the 24th of April. It looked at trends in YouTube video, two reports in particular that had really interesting insights on virtual influencers and AI-generated videos. And the bit that caught my attention mostly was news that every video uploaded to YouTube,
so you take your video, you upload it, can be dubbed into every spoken language on the planet, with the speaker’s lips reanimated to sync with the words they are speaking. I mean, this is either terrifically exciting or an utter nightmare that is approaching fast. So we talked about that, and we haven’t had any comments on that one yet, but this is a topic I’m seeing discussed quite a bit online in various places.
So this is just the start of this, I think. [00:11:00] So that takes us to the end of the recap, Shel.
Shel Holtz: so I didn’t see it. Okay. Lemme talk about that.
Neville Hobson (2): And last but certainly not least, I want to mention a new interview that we posted on the 23rd of April. This was with Zora Artis in Australia, who we interviewed about an article she wrote on the Poppulo blog on bridging AI and human connection in internal communication. It was a really, really good discussion we had with her; it’s definitely worth your time listening to this one.
You will learn quite a lot from what Zora has to say on this topic. What did you think of it, Shel? It was good, wasn’t it?
Shel Holtz: It was fascinating. And I read that post on the Poppulo blog, and I was also engaged in a conversation with Zora at the Team Flow Institute, where we’re both research fellows. She raised it, and it led to a conversation with all the fellows [00:12:00] about this notion of: what would a board of directors do if AI was in the room with them right now? What would they use it for? How would they take advantage of it? Some fascinating discussion. So worth a listen.
Also up now is episode number 115 of Circle of Fellows, the monthly livestream panel discussion that people who watch live are able to participate in in real time. This was about communicating amidst the rise of misinformation and disinformation.
Brad Whitworth moderated this installment of Circle of Fellows with panelists Alice Brink, Julie Holloway, and George McGrath. Sue Heuman was supposed to participate, but woke up feeling ill; she did send in some written contributions that were read into the discussion. So a good one. I’ve listened to it.
You should too. It’s a very timely topic. And just to let you know about the next Circle of Fellows: episode [00:13:00] 116 is scheduled for noon Eastern time on Thursday, May 22nd. The topic is moving to teaching. This is something a lot of communicators do: become adjunct professors or full professors, or even tenured professors.
And we’ll be having a conversation with four IABC Fellows who have done just that: Cindy Smi, John Clemens, Mark Schumann, and Jennifer W. And in fact, I’m speaking at Jennifer W’s class via Zoom pretty soon, so that’ll be a fun one too. You can mark that one on your calendars: May 22nd, noon Eastern time. And that’ll take us to the start of the coverage of our topics for this month, but only after we turn things over to an advertiser for a moment.[00:14:00]
As we have been discussing for some time, AI agents are coming, and to a degree they’re already here. Ethan Mollick, the Wharton professor and, I guess you’d call him, an AI influencer, posted this observation to LinkedIn a few days ago. He wrote: I don’t think people realize how much even a mildly agentic AI system like ChatGPT o3 can do on its own.
For example, this prompt works in o3 zero-shot: Come up with 20 clever ideas for marketing slogans for a new mail-order cheese shop. Develop criteria and select the best one. Then build a financial and marketing plan for the shop, revising as needed and analyzing competition. Then generate an appropriate logo using the image generator and build a website for the shop as a mockup,
making sure to carry five to 10 cheeses to fit the marketing plan. With that single prompt, in less than two [00:15:00] minutes the AI not only provided a list of slogans, but ranked and selected an option, did web research, developed a logo, built marketing and financial plans, and launched a demo website for me to react to. The fact that my instructions were vague and that common sense was required to make decisions about how to address them was not a barrier.
And that’s an OpenAI reasoning model, not an actual agent built to take on autonomous tasks, multiple tasks in sequence in pursuit of a goal. With agents imminent, HubSpot shared a list of seven types of agents in a post on its blog, and I thought it would be instructive, given what Professor Mollick wrote, to go over these seven categories or classes of agents and where they intersect with what we do as communicators.
Now, I’ll give you the caveat that somebody else may develop a different list. Somebody else may slice and dice the [00:16:00] types of agents differently, but this is the first time I’ve seen this categorization, so I thought it was worth going through. They start with simple reflex agents, which operate based on direct condition-action rules without any memory of anything you may have interacted with them about before.
So in PR, we could use this for automated media monitoring alerts: set up agents that trigger instant alerts based on keywords that appear in news articles or on social media, which lets you respond quickly. You could have some basic chatbot responses, you know, simple chatbots on internal or external platforms that will answer frequently asked questions with pre-programmed answers about things like, I don’t know, office hours, basic company information, dates of upcoming events.
And then you could filter inbound communication: automatically flag or filter incoming emails or messages based on keywords that indicate urgency or specific topics and route [00:17:00] them to the appropriate team member to respond to. The second type of agent is the model-based reflex agent. These maintain an internal model of the environment to make decisions, considering past states as well as what you’re asking right now.
So you could use a contextual chatbot: develop these chatbots for websites or internal portals so they can maintain conversational context, remember previous interactions, and then provide more relevant information or support when the employee or the customer comes back for a follow-up or for additional information.
Do sentiment monitoring with that historical context: agents that track media or social media sentiment over time can identify trends and give you historical context for current conversations. So if something’s being discussed around the organization, it can say, well, two weeks ago this conversation happened, and that weighs on what’s going on in these [00:18:00] conversations today.
And then there’s automated information retrieval: agents that can access and synthesize information from internal databases or external sources based on what you ask, providing more comprehensive answers than you get from the simple reflex agents. Goal-based agents make decisions to achieve a specific goal, planning a sequence of actions to reach that objective.
This is what most of us think about when we’re thinking of agents: automated press release distribution, social media campaign management, internal communication workflow automation. This is all possible here. I think I referenced on an earlier episode that I used a test agent, I think Anthropic had set it up, and I had it go out to my company’s website, identify our areas of subject matter expertise and the markets we’re in,
then go out and find 10 good podcasts with large audiences where we [00:19:00] could pitch our subject matter experts as guests, and where it would be an appropriate pitch. And I sat back and watched while it did all of these things. So this is what we’ve got coming. Fourth are utility-based agents, which choose actions that maximize their utility, or a defined performance measure, considering various possible outcomes.
We can use these to optimize communication channel usage, right? Analyze how audiences engage across different communication channels and recommend the most effective platforms for specific messages, desired reach, or desired impact. I can use this for crisis communication simulation and planning,
personalized communication delivery. Fifth are learning agents, which improve their performance over time by learning from their experiences. You can use these to refine your message targeting, to improve the natural language understanding of chatbots that are engaging with customers or employees or whoever,
and to predict [00:20:00] communication effectiveness: they can analyze a number of factors, like message content, timing, and audience demographics, to predict the potential reach and impact of your communications, letting you make adjustments. Sixth are hierarchical agents, which break down complex goals into smaller, more manageable sub-goals.
Here you’ll have higher-level agents overseeing the work of lower-level agents, so you’ll have a human manager managing an AI agent that manages AI agents. Use these for large-scale communication projects, multi-channel campaigns, and streamlining the approval process. And finally, there are multi-agent systems.
These are multiple agents interacting with each other to achieve a common goal or individual goals: integrated communication planning and execution; managing online reputation, with agents monitoring different online platforms, analyzing sentiment, and coordinating responses or engagement based on a unified strategy; and then [00:21:00] cross-departmental communication coordination.
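To make the first of those categories concrete, here is a minimal Python sketch of a simple reflex agent: a set of condition-action rules applied to incoming mentions, with no memory between decisions. The keywords, channels, and routing targets are made up for illustration; the later categories in HubSpot's list layer memory, goals, utility scores, and learning on top of this same loop.

```python
# A simple reflex agent: direct condition-action rules, no memory of past inputs.
# Keywords and routing targets below are illustrative placeholders.

RULES = [
    # (condition, action) pairs evaluated against each incoming item
    (lambda text: "outage" in text.lower(), "alert: page the crisis comms lead"),
    (lambda text: "layoff" in text.lower(), "alert: notify employee comms"),
    (lambda text: "office hours" in text.lower(), "reply: send the standard office-hours FAQ"),
]

def simple_reflex_agent(item: str) -> str:
    """Apply the first matching condition-action rule; otherwise do nothing."""
    for condition, action in RULES:
        if condition(item):
            return action
    return "no action"

# Example run over a stream of mentions (hypothetical data):
mentions = [
    "Customer reports an outage affecting checkout",
    "What are your office hours this week?",
    "Great piece on your sustainability program",
]
for m in mentions:
    print(f"{m!r} -> {simple_reflex_agent(m)}")
```

A model-based version of the same loop would simply carry a record of what it has already seen, which is the difference Shel describes between the first and second categories.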
So we need to understand the distinct capabilities of these different types of agents, and if we do, we’ll be able to leverage them to automate, to gain deeper insights, to do better personalization, and to better achieve our objectives. And I think this is also a good point to mention, I have not had a chance to read it because you said you saw it and commented on it today,
it’s still early here where I am, but Zora Artis, our interview guest, posted something that kind of fits in here too, right?
Neville Hobson (2): Yeah, she shared a post on LinkedIn which I found quite intriguing, written by Jade Beard Stevens, who’s the Director of Digital and Social Innovation at YMU in London. A brief post, but it says it all. I gotta read it out; it’s quite short. She says: I wasn’t shocked, but still had to share this. Rumor has it that OpenAI is quietly working on a native Shopify checkout inside ChatGPT. Apparently leaked code shows a Shopify checkout [00:22:00] URL, buy now, product offers, ratings. No redirects, no search, just chat, compare and buy in one flow.
If this happens, Google, TikTok, even product pages as we know them are all about to change. This isn’t just another e-commerce update. This is the merger of search and checkout. This is AI becoming the new storefront. Brands will need to optimize for AI-first visibility, not just SEO. This could be bigger than TikTok Shop, and it’s already happening.
Now, is this agentic AI? I don’t know, Shel. It kind of fits somewhere in this overall picture of tools and methods emerging. Look at the seven things you read out; there’s some real interesting stuff in there to deep dive into. But what Jade mentions is definitely something to pay attention to, even if you’re not in retail or e-commerce or any of that.
There’s a kind of developing conversation on Reddit about this, huge or not, which has some more detail on what’s happening. I did a quick search on [00:23:00] this topic generally, to see if anything else was talking about it. I did find something, which isn’t this, though I think this is going to replace that other thing I found, which is a Shopify AI Chatbot via ChatGPT, as the title of the app goes, put out not by Shopify, beg pardon,
but by a company called Shockly that builds tools for vendors on Shopify to sell their stuff. This isn’t it, but it has been around since September of 2024, and it is actually quite interesting. It’s an app you install. I see it’s got just under 30 ratings, all five out of five stars, from vendors.
It is all to do with enabling your whole storefront using a tool from ChatGPT. What Jade’s article talks about is this sort of [00:24:00] thing happening natively within Shopify. So that’s a slightly different proposition, but something like this is coming; you’ve already got third-party apps doing this.
Now you’re gonna have a native app doing this. And, well, I don’t wanna get hung up on the word agentic here, but if this enables you to complete the whole buying process, from interest to purchase, to signing up and paying for it, all within ChatGPT, that will appeal to quite a few people,
I think, if it offers something better, faster, less stressful, less hassle, easier than doing it otherwise in Shopify. It’ll attract attention. So add this one to the list of things to pay attention to as well.
Shel Holtz: Yeah, and whether that’s part of an agent or not, I think, depends. It could absolutely be; I could see how that would work in an agentic environment.
I’m thinking of giving the agent the [00:25:00] assignment of buying me a new mirrorless camera, as long as I provide it with the criteria: my price limit, the features that it needs to have, how soon it can be delivered, which brands I don’t want it to consider. But go out and do comparisons of the different models from different manufacturers that meet my criteria,
then do a price comparison to find the best price. Once you have found the best price, buy it and have it delivered, so that I don’t have to do anything else. That’s an agent. So again, you know, if there’s a price at the end, what can communicators do with that? I don’t know how much the PR folks can do with that, but the marketing side of the house can probably do a ton with it.
Neville Hobson (2): Yeah. So one more to pay attention to. I was looking through the HubSpot article you referenced, and there are a couple of things in there that struck me about their views. One is where, under the autonomous AI agents paragraph, they say it’s always a good idea to keep a human involved in any AI operation.
Absolutely [00:26:00] agree with that. A lot of very useful information in HubSpot’s piece, some good explainers of what some of this stuff means. And then the answer to the question about preparing for an agentic AI future: experimenting. I think the concluding sentence probably, okay,
summarizes the whole thing: the future is agentic. Will you be ready? Now, that’s what we asked in 458 when we talked about this topic, and I wonder if we’ll be asking it again after this one. We’ll see.
Shel Holtz: Undoubtedly, we’ll be asking this for some time, because even after the agents have fully arrived and are available,
I think there’s going to be a lot of people in our profession and across industry who are not ready.
Neville Hobson (2): opportunity for.
Shel Holtz: And we’ll talk about that more when we cover another story later.
Neville Hobson (2): We will. Yeah. So let’s take a look at something quite interesting that popped up in the last few days. [00:27:00] Imagine an AI tool that promises to help you cheat on everything from job interviews to academic exams.
That’s exactly what Cluely offers. Created by two former Columbia University students, Chungin “Roy” Lee and Neel Shanmugam, Cluely acts as an invisible AI assistant that overlays real-time support onto any application a user is running. It gained attention and controversy after Roy Lee was suspended from Columbia for using an early version during a job interview.
Despite this, Cluely has just raised $5.3 million in funding from investors, promoting its vision of true AI maximalism, where AI can assist in any life situation without detection. The tool is designed to be undetectable, providing real-time suggestions during interviews, exams, writing assignments, and more, much like an augmented reality layer
but for conversation and tasks. Supporters argue it could level the playing field for those who struggle with traditional [00:28:00] assessments, but critics warn it crosses a serious ethical line, potentially devaluing qualifications and undermining trust in recruitment and academic credentials. A real-time interview assistant raises questions not just about competence, but about honesty and disclosure, which rarely happens.
Interestingly, The Verge tested it. Their real-world testing found that Cluely is still very rough around the edges: technical issues, latency, and clunky interactions make it more proof of concept than polished product, at least for now. And did I mention they just got over $5 million in investor funding?
The founders defend the provocative framing. They describe cheating as a metaphor for how powerful AI assistance will soon feel, much like the early controversies over calculators or spellcheck, they say. Not quite the same thing, I don’t think, Shel. But are we looking at the next Grammarly, or are we opening the door to a darker future where nobody can be sure what’s real anymore?
So the question for you then, Shel, is: what does this tell us about the [00:29:00] blurring lines between assistance and deception in an AI-driven world?
Shel Holtz: Well, I think there’s a couple of ways to look at this. I did hear Lee interviewed on Hard Fork. It was a great interview, and he made a couple of points. First of all, he said that, having been through these types of interviews, this is the kind of interviewing you do for a coding job,
and the tests that they give you have absolutely no relevance to the kind of work that you’re doing. You’re gonna do this once for the interview, and then you’re never gonna do it again. So he doesn’t think that helping people figure out how to do that particular exercise is all that much of a cheat.
But he also said that everybody programs with the help of AI these days, and he says it just doesn’t make sense to have any kind of interview format that assumes you don’t have the use of AI to help you code. I absolutely see that point. But on the other hand, I think this is [00:30:00] just one instance of the kind of thing that AI is going to enable,
and there will be times when it can be very problematic, much more problematic than in this case. If somebody can cheat on, say, their legal exam or their medical exam, then you’ve got a problem: somebody who’s not prepared to go out there and operate on you passed the boards because they had help from a program that was written to help them cheat and pass.
So it’s the type of thing that society needs to be thinking about and isn’t yet.
Neville Hobson (2): So if I get this right from what you said, Roy Lee thinks it’s okay to cheat in coding ‘cause it’s a stupid question to ask and you’re only ever gonna do it once. So therefore it’s okay to cheat. Meaning you actually pretend you do know how to do this even though you don’t.
I mean, that is bullshit, frankly, truly. Don’t you think?
Shel Holtz: Well, his point is that, yeah, you don’t know [00:31:00] how to do it, but you don’t have to, because you’re never going to on the job.
Neville Hobson (2): So don’t, don’t, don’t, don’t even take the exam and don’t apply for that job. That’s what I would say.
Shel Holtz: I guess then you don’t get any jobs, right?
Well, cheating is
Neville Hobson (2): cheating
Shel Holtz: His point is that you’re, well, yeah, it’s cheating. Yeah. But he says his point is that the cheating in this instance isn’t going to affect your ability to do the job. Whereas in other instances, well, I’m still cheating. I’m not defending it. Understand. I’m just telling you what he said.
Neville Hobson (2): Yeah, sure. Yeah. But it’s still cheating. I, I would say, I mean, it is, to me, this is the same as saying, or someone’s a little bit pregnant or, you know, I’m, I’m, I’m, you know, that kind of stupid kind of defensive argument. This is an indefensible situation in my view that
Shel Holtz: of course, it used to be considered.
Neville Hobson (2): Yeah, but no, no, you can’t. You can’t do it by degrees, Shel, I don’t believe. Honestly, I don’t. You are cheating or you are not. And in this case, again, from how you describe what Roy Lee said, effectively it’s saying, well, this is a dumb question to ask and [00:32:00] I’m never gonna do this again, so I’ll get this thing to do it for me, basically.
And they won’t know this. That’s the other thing: they do not know this. They think, Roy’s a smart guy, this fella, let’s give him the job. What a ridiculous outcome. And the other ones you mentioned, in degrees, you know, taking legal exams or passing to be a surgeon, yeah, they’re serious too, but they’re all the same.
They’re cheating. But I then kind of flip a bit by saying that this is society as we are. I’m afraid this is humans doing this. This will be out there. And this makes it even more difficult to know what’s true and what’s not, and who you can trust and who you can’t. So, you know, welcome to the new world there.
Shel Holtz: I think the adaptation that has to happen has to happen on the part of the people conducting the interviews, not the people taking them. And the reason for that is, I mean, if you think about it, it used to be considered cheating to, to bring a calculator into, well, they mentioned that’s
Neville Hobson (2): the argument he gives.
Ridiculous.
Shel Holtz: Yeah. Well, I mean, everybody’s allowed to use a [00:33:00] calculator now because the people that was 60,
Neville Hobson (2): 60 years ago. Yeah. So maybe in 50 years this would be normal. Yeah.
Shel Holtz: Who conduct the tests came to realize that the people who do the work are able to use calculators. So they should have been part of the test all along.
So I think that’s a legitimate argument, not a, not a legitimate argument for cheating, but for updating the testing so that people don’t feel like they need to.
Neville Hobson (2): So in the meantime, that’s not the landscape. So they need to develop it. So maybe the simplest way to do this is send your AI agent in to take the exam for you.
Has that,
Shel Holtz: well, there are people doing that for job interviews. Yeah, of course. They, they’re probably pretty close to that. Yep. We’ve seen some interesting developments recently with two platforms taking different approaches to verification, and I think some of this may be a little backlash to X, where now you can just buy the blue check mark and it doesn’t actually verify anything other than that you pony up the money for it.
But LinkedIn and Bluesky [00:34:00] have taken steps with their verification programs. Let’s start with LinkedIn, which is allowing verified identities to extend beyond its own platform. This change means your verified LinkedIn identity can now be visible on other platforms, which is designed to enhance trust and transparency across the internet. The system leverages open standards and cryptographic methods to ensure authenticity and security.
The system leverages open standards and cryptographic methods to ensure authenticity and security. What makes this particularly interesting is how it integrates with Adobe’s technology. Adobe’s content credential system is one of the tools supporting this cross-platform verification. So when you verify your identity on LinkedIn, that verification status can essentially travel with you to other websites and services that support these standards, including Adobe’s Behance.
Now, this is a site that helps creators and people who need to hire creators connect. And this is a fundamental shift in how verification works: rather [00:35:00] than a siloed verification system on each platform, LinkedIn’s embracing an interoperable approach that lets your verified status function as a digital passport of sorts.
Now, while it’s too bad this isn’t tied directly to the fediverse protocols, the significance for communications professionals can’t be overstated. As content creation becomes increasingly distributed across platforms, having a verified identity that travels with you simplifies your ability to establish authenticity in multiple spaces.
For organizations managing multiple spokespersons or content creators, this can streamline verification processes considerably. Meanwhile, Bluesky has taken a different but equally innovative approach to verification by introducing a new blue check system just last week. They’re implementing what they call a user-friendly, easily recognizable blue check mark that will appear next to verified accounts.[00:36:00]
The platform will proactively verify authentic and notable accounts while also allowing trusted verifiers, select independent organizations, to verify accounts directly. Now, what’s really interesting about Bluesky’s approach is how it distributes verification authority. Under this system, organizations like the New York Times can now issue blue checks to their journalists directly within the app, and Bluesky’s moderation team will review each verification to ensure that it is what they say it is.
This creates a more decentralized verification ecosystem rather than putting all verification power in the hands of the platform itself. Bluesky’s verification system has transparency built in: users can tap on someone’s verified status to see which trusted verifier granted the verification. This adds a layer of context that helps users understand not just that the account is verified,
but who [00:37:00] vouched for it. Now, before this update, Bluesky had been relying on a domain-based verification system, letting users set their website as their username; for example, NPR’s handle is npr.org, and US senators verify their accounts with their senate.gov domains. This method is gonna continue alongside the new blue check mark system, and this gives users multiple ways to establish authenticity.
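For the technically curious, here is a rough sketch of how that domain-based method works under the hood. In the AT Protocol approach Bluesky uses, a domain owner either publishes a DNS TXT record at _atproto.&lt;domain&gt; containing the account's DID, or serves that DID at a well-known HTTPS path on the domain. The Python below checks the HTTPS variant; example.com is a placeholder, not a real handle.

```python
# Check the HTTPS variant of AT Protocol handle verification: a domain used as a
# Bluesky handle serves its account DID at /.well-known/atproto-did.
# (The alternative is a DNS TXT record at _atproto.<domain> with "did=<DID>".)
import urllib.request

def resolve_handle(domain: str) -> str:
    """Fetch the DID that a domain handle claims via the well-known endpoint."""
    url = f"https://{domain}/.well-known/atproto-did"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8").strip()

if __name__ == "__main__":
    # Placeholder domain; swap in a real handle to try it.
    try:
        print(resolve_handle("example.com"))
    except Exception as exc:
        print(f"No atproto DID published at that domain: {exc}")
```

A handle checks out when the DID published by the domain matches the DID behind the account, which is a check anyone can reproduce independently rather than taking the platform's word for it.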
Now, the evolution of these verification systems comes at a critical time with scammers and impersonators on the rise. A recent analysis found that 44% of the top a hundred most followed accounts on blue sky had at least one doppelganger account attempting to impersonate them. For those of us working in organizational communication, these developments signal a series of important trends.
First, verification is important, and it's becoming distributed and contextual rather than a single authority declaring who's authentic. We're moving toward [00:38:00] ecosystems where multiple trusted entities can vouch for identity. Second, cross-platform verification is emerging as a solution to digital fragmentation.
LinkedIn's approach particularly shows how verified identity could function seamlessly across digital spaces rather than being siloed within individual platforms. Third, transparency about who is doing the verifying is becoming important. Bluesky's approach of showing which organization verified an account recognizes that the source of verification matters almost as much as the verification itself.
For organizations, these trends suggest that we really ought to be thinking more holistically about verification strategies. Rather than just getting verified on each individual platform, we are really gonna need to start thinking about establishing verified digital identities that can travel with our content and our spokespersons across the net.
Neville Hobson (2): Very interesting development. I [00:39:00] hadn't familiarized myself much with the LinkedIn one, but that's equally interesting. Bluesky, though, to me is definitely moving ahead in a very interesting area, unlike X. I think you mentioned, Shel, that some people are seeing this as a slap in the face to Musk.
That's probably way down the priority list for them, but yes, I bet they are. I found most interesting the way they've gone about this in terms of the levels of verification. You've got your little blue check mark looking slightly different depending on the verification system.
And by the way, I think it's a smart move to follow the blue check, although technically it's not a blue check, it's a white check on a blue background. But whatever, people call it a blue check mark because it's familiar, thanks to Twitter as it was, and the guy who trashed it completely, 'cause now the only verification means you've paid Musk so many dollars a month and therefore you're verified.
I mean, that's Twitter's, or X's, definition of what verification means. No value to it, in my view, Shel, frankly. But this, though, [00:40:00] I think is far more interesting, particularly the transparency about who has verified you. I've used my own domain, a domain I acquired back in 2023, for this purpose: to verify my handle by domain.
That's nevillehobson.xyz. Why .xyz, you might ask? Because at the time the metaverse was a big deal, NFTs were hot, and everyone who was anyone had a domain ending in .xyz. So hey, that's a bandwagon I'll jump onto, which I did. So I'm now using it, have been for a while, and it's only used for that purpose currently.
So, you can't request verification; that's another thing to mention with Bluesky. It's not so much that you are invited as that you might suddenly get a note saying they have verified you, or one of these other organizations might. If you're on a domain with your employer, they can verify you.
And there is something equally interesting on this. I'm not quite sure if this is just temporary or whether it'll stay around, but you can actually verify yourself. I've [00:41:00] seen some people doing that. I haven't done it, because I can't see the point; the point of verification to me is the trust that someone else has verified you, not you doing it yourself.
So maybe that will disappear or it'll have some other function, I don't know. But the transparency elements, according to the screenshots in Bluesky's announcement posts about this, are great. A very clear so-and-so is verified: it says this account has a blue check because it's been verified by trusted sources.
Then it lists who those sources are and the date they performed the verification. That adds a lot to the trustworthiness you perceive, rather than just somebody simply saying, yep, you're verified, you get a blue check. If you're an organization that verifies, you'll have a different style of check. And these will all become quite familiar.
They're not complicated at all. So you are right in what you said earlier: verification isn't just a casual thing anymore. You need to have a strategy about who in your organization, if you are a large organization in particular, gets [00:42:00] verified, for what purpose, by whom, and we'll see that emerging as this picks up.
But this is a great start. They do say, and this is going back to the domain, you can self-verify with a domain. That's the only kind of self-verification that makes sense, because to do it you've got to make changes at your registrar in the DNS settings, and a few other things, and also engage with Bluesky to do this.
They say during this initial phase they're not accepting direct applications, as I mentioned. But they do say that as this feature stabilizes, so I guess as all the excitement dies down and people see how it's all working, they'll launch a request form for notable and authentic accounts interested in becoming verified or becoming trusted verifiers.
So during the course of 2025 we'll see this develop, and maybe it will become the kind of benchmark standard for verification on social networks like this. So it's interesting.
Shel Holtz: We need a standard, and I'd like to see that [00:43:00] standard integrated with the fediverse standards, because these all ought to be interoperable.
We really ought to be able to share a post in one place where we are verified, have that post show up wherever people have chosen to follow us from, and have that verification travel with us. And people should be able to click on that verification and see who vouched for us. They should be able to see that the spokesperson for my company was verified by me or by the CEO, and it all works together.
Neville Hobson (2): I think that will emerge. Thinking about this cross-posting idea, it's been in place in a couple of places, but it's very, very flaky. I'm talking about things like, for instance, a WordPress plugin that's been around for a while, at least a year if not longer, that lets you publish your post and then share it across the fediverse via a connection with Mastodon.
And you've then got Threads doing the same thing, [00:44:00] but these all require tweaks to your platform. Probably the one that shows you, if I can use this phrase again, the direction of travel is Ghost, the platform I joined at the beginning of this year, which has recently enabled the ability to share your posts with Bluesky.
Now, Ghost has invested a lot of time, effort, and probably a bit of money too, I think, into its social web offering, which is in beta. That's all to do with the ActivityPub protocol, whereas Bluesky has a different protocol, AT Protocol, yet this works from Ghost to Bluesky via a bridge. That's a little technical, and it has got to be just an interim arrangement whilst this plays out further.
So someone like Ghost is making big inroads into enabling this kind of thing. And I would say we're gonna see a lot of activity [00:45:00] during 2025 from Mastodon in particular, as well as people like Ghost and others, to connect up these disparate elements of the fediverse so that it becomes more cohesive.
But it’s gonna take time.
Shel Holtz: Yeah, the fediverse is nascent, but it's also, I think, inevitable. We've been talking for quite some time now about what the successor to Twitter is, now that X has become what it has become. And I'm not sure that there is a single successor. I think there are a number of places that people are attracted to.
It could be Ghost, for its newsletter functionality as much as for its blogging functionality. It could be Threads, it could be Bluesky, it could be, you know, whatever. But as long as, wherever I am, I can follow who I want to follow and have that appear in the network I have chosen, I'm good.
So I think this is where things are headed, inevitably, since I think the days of somebody being able [00:46:00] to come along and say, I'm the new 800-pound gorilla of social networking, everybody's coming here, are over.
Neville Hobson (2): Yeah, it's been apparent that that's likely to be the case for a bit. I believe very much that the time is gone for monolithic, centralized social networks like Facebook, for instance.
Now is the time for niche networks. People can set things up themselves. It doesn't matter whether you've got 50 people on there or 50,000 people, it doesn't matter. And indeed, the recent outage on Bluesky is an interesting indicator of the fragility of all of this. Dan's gonna talk about this a bit later in his report, but this is an interesting time.
It's almost like things are maturing, it seems to me. And I think you're right when you say that people aren't so much attracted by the idea of a centralized place where, hey, we've all gotta go here, after the experience on X. You've got more people saying, I want to get outta here, where do I go?
So we're still at that phase, and you've got something interesting with Trump's, sorry, not Trump, Musk's Grok [00:47:00] and the chatbots being developed around it and all this stuff. So that's something interesting in that area too. It's all happening at a time when communicators should pay closer attention to what is happening here and the implications of it, just as you and I are doing.
And if you don’t wanna do that, that’s fine. Just listen to FIR ‘cause we’ll help you understand it.
Yep. Okay, that's a really good report, Dan. Thank you. Good topics. You talked about Bluesky; I mentioned just before your report that the outage was unfortunate, but is it not an indicator of precisely that fragility I mentioned previously? The different definitions of decentralization that you mentioned, I think that's
possibly a communication issue, because people seem to be latching onto, hey, it's decentralized, when actually it's more like, it's going to be decentralized, 'cause that's the aspiration we're working towards, which is the case with Bluesky. That was very good on Threads' move to .com and the web improvements.
I must admit I was a bit yawny about [00:48:00] that. You know, .net, .com, do I care as a user? Well, maybe I should, because I then read somewhere else that the move to .com enables Meta to do things that they can't do with a .net domain. And I'm sure you'll know more about that than me, Dan, at the Internet Society.
Again, interesting developments with what’s happening with all of this. So thanks for the report, Dan. This is really, really a good one.
04/28/25 | 1 Comment | FIR #462: Cheaters Never Prosper (Unless They’re Paid $5 Million for Their Tool)
The Future of Management Is Hybrid: Leading Human-AI Teams in a New Era of Work
Ask coders how they spend their time these days, and they’re likely to tell you they mostly oversee generative AI tools that craft most of the code. The quality of the code LLMs produce has improved dramatically. People who used to craft code from scratch now review and adjust AI chatbot outputs. For all practical purposes, they have become AI managers.
To some extent, that’s a part all managers are destined to play. The arrival of AI agents will catalyze this transition.
Agents—AI systems that act autonomously to carry out multiple tasks in pursuit of a goal—will become part of most managers’ teams. Working alongside human employees, they will complete work in a fraction of the time it takes a human. It won’t be long before hybrid human-AI teams are common in every industry.
Consider healthcare, where an AI agent will draft post-visit follow-up patient care plans, schedule check-ins, send reminders, and flag unusual symptoms in post-visit surveys for review. The human nurse practitioner will review and personalize the follow-up plan, contact patients in need of emotional support or clarification, and make clinical decisions about concerns the AI has flagged.
In financial services organizations, agents will monitor client portfolios, market conditions, and life events, suggesting portfolio rebalancing or financial moves. The human financial advisor will evaluate the agent’s suggestions based on their understanding of client goals, appetite for risk, and emotional readiness, and hold relationship-building meetings with clients.
In law firms, agents will scan contracts for risk clauses, missing terms, or compliance issues, while junior associates will analyze the agent’s highlights, apply legal reasoning to ambiguous cases, and advise on negotiation strategies based on client context.
Even in the industry in which I work—commercial construction—we are likely to see agents monitoring site sensors, drone footage, and safety incident reports in real time to auto-generate daily construction logs, track progress against schedule, and flag potential safety or quality issues. The assistant superintendent on the job will review the agent’s reports, resolve discrepancies (for example, an agent might not consider a weather delay), and make judgment calls on escalating issues or adjusting plans.
These are but a few of the ways agents will alter the means by which work gets done.
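The pattern all four examples share, an agent producing drafts or flags and a human applying judgment before anything is acted on, can be expressed as a simple review queue. The sketch below is hypothetical: the fields, confidence threshold, and item names are assumptions, not any vendor's workflow.

```python
# Hypothetical sketch of a human-in-the-loop review queue for agent output.
# Items the agent is confident about are auto-filed; everything else waits
# for a human decision. Field names and the threshold are assumptions.
from dataclasses import dataclass, field

@dataclass
class AgentItem:
    summary: str          # e.g., "Possible safety issue near crane on Lot 4"
    confidence: float     # agent's self-reported confidence, 0.0-1.0
    needs_human: bool = field(init=False)

    def __post_init__(self):
        # Anything below the threshold is routed to a person for judgment.
        self.needs_human = self.confidence < 0.9

def triage(items: list[AgentItem]) -> tuple[list[AgentItem], list[AgentItem]]:
    """Split agent output into auto-filed items and items for human review."""
    auto = [i for i in items if not i.needs_human]
    review = [i for i in items if i.needs_human]
    return auto, review

auto_filed, human_queue = triage([
    AgentItem("Daily log generated from site sensors", 0.97),
    AgentItem("Schedule slip flagged; possible weather delay not accounted for", 0.6),
])
print(len(auto_filed), len(human_queue))  # 1 1
```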
Managers Aren’t Ready
The addition of AI agents to teams will have profound implications for how managers manage. Consider the four examples above:
- Healthcare—Healthcare managers will have to align clinical protocols with AI-generated outputs, ensure HIPAA compliance (in the U.S.), and train staff to interpret and override AI recommendations when necessary.
- Financial Services—Managers will need to ensure their advisors understand the limits of AI recommendations, balance automation with trust-building, and train teams to explain AI logic to skeptical clients.
- Law firm—Partners or legal managers will have to assign roles clearly (AI as first-pass filter, human as final reviewer), prevent overreliance on AI in nuanced deals, and maintain audit trails for liability.
- Construction—The manager will ensure the AI is calibrated to real-world conditions, train staff to verify AI data, and build workflows that integrate AI updates into morning planning huddles.
Countless challenges will face managers as AI agents become a routine part of the mix. Scheduling alone will require new ways of thinking, since agents will complete tasks in hours or days instead of the weeks or months it may have taken humans, yet humans will continue to need the same amount of time they always have to complete their assignments. Rethinking how and when things need to get done, accounting for the handoffs between humans and AI agents, will test managers' skills.
Other challenges managers will face include…
- Defining roles and responsibilities—What do AI agents do and what requires the human touch? There is more to this than just passing out assignments. In many cases, managers will have to redesign their processes from scratch.
- Oversight and trust—While AI agents operate autonomously, managers will remain accountable for the results they produce, requiring them to monitor AI decisions and intervene when necessary. Managers will also need to establish clear protocols and boundaries for agents' autonomy, like setting rules for what an agent can decide on its own and what should require human approval (see the sketch after this list).
- Skill gaps and training—To manage AI agents, managers must understand how they work and ensure their team members are trained to work alongside them.
- Maintaining morale and trust—Introducing AI agents to teams is bound to raise employees’ anxiety levels. Managers must proactively address job security fears and demonstrate how AI agents assist but don’t replace the team.
- Performance evaluation and accountability—How do you evaluate performance in a hybrid human-AI team? A rethinking of success metrics is in the cards.
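One concrete way to set the autonomy boundaries mentioned above is an explicit action policy: what the agent may do on its own, what needs sign-off, and what it may never do. This is a generic sketch under those assumptions, not a feature of any particular agent platform; the action names are invented.

```python
# Generic sketch of an autonomy policy for an AI agent on a hybrid team.
# Action names and the three tiers are illustrative assumptions.
from enum import Enum

class Autonomy(Enum):
    ALLOW = "agent may act on its own"
    REQUIRE_APPROVAL = "human must approve first"
    FORBID = "agent may never do this"

POLICY = {
    "send_routine_reminder": Autonomy.ALLOW,
    "rebalance_client_portfolio": Autonomy.REQUIRE_APPROVAL,
    "communicate_clinical_decision": Autonomy.FORBID,
}

def check_action(action: str) -> Autonomy:
    """Default to requiring approval for anything not explicitly listed."""
    return POLICY.get(action, Autonomy.REQUIRE_APPROVAL)

for action in ("send_routine_reminder", "rebalance_client_portfolio", "delete_project_files"):
    print(action, "->", check_action(action).value)
```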
And all of this must happen while managers retain all their usual people-management duties—including, in some cases, managing remote workers, a challenge to which many managers still haven’t risen.
Redefining the Manager’s Role
Much of the work that occupied managers will shift to AI, especially administrative work like handing out assignments, tracking progress, compiling reports, and making routine decisions. With less busy work, managers should be able to focus on those aspects of managing that require a human touch, shifting to leading and mentoring, employing soft skills over hard skills, as shown in this chart:

Wharton professor and AI thought leader Ethan Mollick has suggested companies may shift to more fluid, project-based structures where “AI will act as connectors, while middle management will focus more on human-AI coordination.” Instead of managing a rigid all-human team that grinds through its work, a manager might move from project to project as needed, overseeing a flexible AI-human hybrid team.
A day in the life of a manager in an AI-integrated team might involve checking a dashboard of AI agents’ overnight work, consulting with a data AI about market trends to inform strategy, and then spending the afternoon in one-on-one meetings with team members to coach them through challenges and build morale.
Managers may also work closely with technical staff on agent fine-tuning (a bit like managing the "training" of an AI much as they manage the development of an employee). As Anthony Mavromatis at American Express observed, AI is "freeing up (managers') time and allowing them to focus on the essence of their job": the creative and innovative aspects that AI can't do.
Empathy, ethical judgment, communication, and adaptability will define great managers in the AI era.
In fact, despite growing consensus (including among employees) that AI could one day take over their managers' jobs, the manager's role will become more pivotal as they focus primarily on leadership. If adding AI agents creates a "digital workforce" working alongside humans, the manager's role is to orchestrate this human-AI symphony.
However, too many managers lack soft skills, focused as they are on driving their teams’ work and slogging through hours and hours of administrative tasks. The management training most companies provide their managers has not been updated to account for the transformational role AI will play on teams.
Can Communicators Help?
As a communicator, I don’t look at any business activity without seeing it through the communication lens. Internal communicators who choose to wade into these waters can have an outsized impact on how well their organizations adapt. After all, like so many things digital in organizations, this is as much about change management as it is about the technology. Communicators will get people on board, informed, and comfortable, ensuring everyone understands what’s changing and why.
To do this, communicators need to…
- Set expectations with transparency—Communicate how AI agents will affect workflows, job roles, and day-to-day activities. When employees know the facts, they’re less likely to fear the worst.
- Two-way dialogue to address concerns—Provide channels that allow employees to voice their concerns and get answers.
- Communicate the “why”—A compelling narrative will make it easier for the pivot to go smoothly. Tell the story of why the company is embracing agents. Is it to improve customer experience? To reduce tedious work and free people for creative tasks? Paint the vision of how AI will ultimately benefit the employees and the company.
- Guidance, training, and resources—Let employees know what support is available, and create some of that support in the form of intranet pages and other easily accessible resources.
- Highlight success stories and quick wins—Nothing motivates employees to adopt a new behavior like seeing other employees recognized for that behavior.
Internal communication should be the bridge between the technological change and the human side of the organization. When done right, robust internal communication ensures that AI agents are introduced not as mysterious black boxes, but as well-understood new teammates that everyone knows how to work with.
Management in an AI-Driven Workplace
Once human-AI hybrid teams are up and running, organizations will find themselves on a trajectory to a time when they will be flatter and more agile, with managers serving as AI strategists. New definitions of what a “good manager” is will emerge. Looking 5-10 years out, I can envision a workplace where AI agents are ubiquitous, embedded in almost every process. The very definition of a “team” or “workforce” will evolve to include digital entities. The managers who thrive will be those who embrace this future and guide it rather than resist it. They will be lifelong learners, continually adapting as AI evolves. They’ll also be advocates for their team’s humanity – ensuring that technology serves to enhance human potential rather than replace it.
The managers of the AI era won’t be those who can out-calculate a computer but those who can leverage AI to amplify human ingenuity. They will create teams that are not only highly efficient but also creative, resilient, and ready for whatever the future of work brings.
04/12/25 | 0 Comments | The Future of Management Is Hybrid: Leading Human-AI Teams in a New Era of Work
Take My Books. Please.
Over the years, I have authored or co-authored six published books. Every one of them is in LibGen—Library Genesis—the shadow library project that provides unfettered access to written material. Meta used LibGen to train Llama, its open-source generative AI model.
I’m fine with that.
I know a lot of my peers disagree. That’s fine with me, too. I understand their objection. But I do have a different perspective, one that aligns with the philosophy Big AI is promoting. It was my perspective well before I heard Big AI explain it.
Before delving into books, it’s easier to explain the concept with images. After all, graphic designers and artists are equally exercised about image generation models trained on images they created merely by virtue of those images being available on the web.
Patterns, not Plagiarism
By now, I hope most people who use generative AI tools understand how they work. The models don’t include databases of the millions of images scraped from the web. The models “learn” by “seeing” all those images and identifying common patterns. If you see 20,000 images of cats—photographs, paintings, sketches—you learn the patterns that make a cat. When you ask an AI image generator for a picture of a cat, it creates a bespoke image based on the patterns it learned in its training.

Any artist can look at one of those bespoke images and see their own style. This is the source of their distress. However, any aspiring artist can spend hours at an art museum or gallery, studying the styles of the artists whose works hang there. You even see them: They sit on museum benches with sketchpads, copying what they see in order to learn and adapt the styles of the masters displayed on museum or gallery walls.
Oddly, nobody assails these art students, accusing them of intellectual property theft. This has been how artists learn for centuries. Pablo Picasso was influenced by Paul Cezanne's treatment of form and space. Vincent van Gogh was influenced by Jean-François Millet, especially his pastoral and peasant scenes. Claude Monet? He adopted the style of J.M.W. Turner, whose exploration of light and atmospheric conditions inspired Monet's Impressionist techniques. Paul Gauguin's bold use of color and simplified forms inspired Henri Matisse. Marcel Duchamp influenced Andy Warhol. Thomas Hart Benton influenced Jackson Pollock. The list goes on and on.
And yet we never, ever, hear anybody complain about the ethical shortcomings of these artists.
Authors Learn from Authors
Writing is no different. George R.R. Martin was heavily influenced by J.R.R. Tolkien, historical fiction writer Thomas B. Costain, and fantasy pioneer Jack Vance. J.K. Rowling drew inspiration from Tolkien, C.S. Lewis, and Roald Dahl for her Harry Potter series. Ray Bradbury inspired Stephen King. Simone de Beauvoir influenced Margaret Atwood. My all-time favorite writer, Gabriel García Márquez, was influenced by the surrealism of Franz Kafka and by the style of one of my other all-time favorite writers, William Faulkner.
Even in non-fiction, writers' styles are influenced by others. Yuval Noah Harari (whose fabulous "Nexus" I just recently finished) was heavily influenced by historians and thinkers like Jared Diamond and Richard Dawkins. Robert Caro (whose "Master of the Senate" I am reading now) read a lot of Barbara Tuchman. The great Brené Brown was inspired by social researchers and psychologists like Carl Rogers and Harriet Lerner.
Artists and writers, then, incorporate the styles of other artists and writers into their work. They don’t have to spend a penny on any of the work they internalize. They can check out books from the library or pay a small fee to enter a museum and drink in the styles of hundreds of artists. As they incorporate elements of what they have seen and read into their own work, they make money from it.
Just like generative AI. The only difference is that a large language model can’t go to a museum or gallery. The art has to be brought to them.
Caveats
If somebody picks up one of my books (even without paying for it), learns from it, and makes some bank based on what they learned, that’s great. If someone queries an LLM and gets an answer that includes information I shared in one of my books, that’s great too, even if the company behind the LLM didn’t pay for it. (The royalty from the sale of one book wouldn’t be enough to take my wife out for dinner at a drive-through.) The LLM isn’t plagiarizing my book (with which I would have a problem). It just learned.
This doesn’t mean I don’t have some issues with the practice of hoovering up every scrap of information available. Thinking about my books, the last one was published in 2008. It’s all outdated. In 2006, my podcasting partner, Neville Hobson, and I had our book, “How to Do Everything with Podcasting,” published by McGraw Hill. Podcasting in 2025 is dramatically different than it was in 2006, just two years after it was introduced. Nobody thinking of starting a podcast should use that book. It would be bad if a query to an LLM produced information from that book.
(If I had a more recent title, though, I wouldn’t feel differently. The fact that some detail from my book might find its way into an LLM’s response would not prevent one person from buying my book. I would be chuffed to learn an LLM incorporated knowledge I contributed in its answer.)
I also sympathize with living artists whose styles are duplicated for original images and agree that the companies behind the image generators should not allow it without some form of compensation.
I have more problems with LibGen than I do with LLMs. The idea that anybody can grab a copyrighted book and read it without paying for it is troubling. If 100 people read one of my books on LibGen, the total royalties actually COULD cover the cost of a fast-food meal for my wife and me.
In every other case, however, I have no problem with an LLM being trained on my copyrighted work. You’re welcome to it.
04/12/25 | 0 Comments | Take My Books. Please.
FIR #456: Does AI Put Communication Expertise At Risk?
It's not just jobs that AI will affect. It's the perception that employees have important expertise. After all, if AI can do the work, it's easy to view employees' special knowledge and experience as less important to the organization. Neville and Shel examine the steps communicators can take to continue to be viewed by leaders as subject matter experts whose expertise brings value to the company. Also in this episode:
- The publishing platform Ghost is enabling technology that embeds it in the fediverse.
- New studies reveal that bad communication is leading employees to leave their jobs.
- A national UK newspaper has launched AI-curated news for “time-poor audiences.”
- Unilever is stepping back from its purposeful activities, opting to invest heavily in influencer marketing.
- Have fans of your brand given it a nickname? New research suggests you probably shouldn’t use it.
- Dan York reports on the Internet Engineering Task Force’s work on a way for websites to signal what AI can collect and process.
Links from this episode:
- The Social Web Foundation
- Survey Results: People take Pride in Their Jobs
- Independent launching AI-powered news service for ‘time-poor audiences’
- BBC News to create AI department to offer more personalised content
- How Gen AI Could Change the Value of Expertise
- LinkedIn Skills on the Rise 2025: The 15 fastest-growing skills in the US
- Companies’ biggest barrier to AI isn’t tech — it’s employee pushback. Here’s how to overcome it.
- Farewell Photoshop? Google’s new AI lets you edit images by asking.
- Unilever swaps social purpose for social media as new CEO calls brands “suspicious”
- Why Brands Should Avoid Using the Catchy Nicknames Consumers Give Them
- How nicknames may weaken brands
Links from Dan York’s Tech Report
- IETF’s AI Preferences Task Force (you are welcome to join the mailing list and participate)
The next monthly, long-form episode of FIR will drop on Monday, April 28.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or send us an email.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville Hobson: Hey everyone, and welcome to For Immediate Release. This is episode 456, the monthly long-form episode for March 2025. I'm Neville Hobson in the UK.
Shel Holtz: I’m Shel Holtz in Concord, California. We are delighted that you have chosen to join us for today’s review of really interesting material that has surfaced over the last month in the world of communication, business, and [00:01:00] technology.
We will start, as all of our monthly episodes start, with a look at the short midweek episodes that we have produced since the last monthly, which was episode number 452. But Neville, we have some comments that predate that episode, that have come in since that last monthly episode in February. The first of these is a comment on episode 451 that comes to us from Sally Get, who says: Verizon recruiters have a new tactic, dangling the remote/hybrid work carrot.
AT&T is requiring workers to return to the office full-time. Rival Verizon is touting its more flexible opportunities as a way to add top talent to the V Team, per an email sent to AT&T employees that Business Insider found. There are 1,200 open Verizon roles across the US, 10 of which are remote and many of which require at least eight [00:02:00] in-office days a month.
But AT&T isn't budging, telling Business Insider it wants people who want to work in team environments with strong relationships and collaboration fostered by an office construct. So this battle over return to office, and employees who desire to continue to work remotely, is ongoing.
Neville Hobson: That was a good comment from Sally.
It makes a lot of sense, what she said. Let's have a quick look at the episodes we've done, including the last monthly, because we got a few comments, right, Shel? So we talked about quite a range of things in 452, the long-form monthly for February: YouTube shifting from mobile to TV.
Are we living in the age of chaos communication? That's a big topic, I must admit. The impact of loosened content moderation policies, a Gallup report on what people want from leaders, any value to AI-generated research panels, we asked. It may be the end of the line for LinkedIn hashtags, we pondered. And Dan York's tech report [00:03:00] covered Mastodon and a few other things.
So a pretty big discussion field over the course of more than 90 minutes, that one, wasn't it, Shel. And as you mentioned, we got some comments on that.
Shel Holtz: We did, two of them. One from Kristi Goodman, who says: I have a note to add to your conversation about changing social channels. My nonprofit had a surprise last week.
We're on a crazy number of social channels because, as you know, it's important to be where your people are. With dwindling followers and engagement, our plan at the start of the year regarding Twitter/X was to maintain our main account just to monitor it. We'd never advertised there. We expected to walk away soon.
But during a 20-hour state legislative committee that we were part of, advocates and reporters took to Twitter with lots of live tweeting, info sharing, and even new followers. 85 percent of our engagements that day were on Twitter. I honestly don't know what to think. And then, as a bonus, she shared a photo from a few months [00:04:00] ago when Bryan Person drove to Austin for my office holiday breakfast.
He’s been producing IRA’s podcast since it launched in 2025, it says.
Neville Hobson: Yeah, terrific. I saw Kristi's comments on LinkedIn; I think I left a reply to it. That picture of Bryan is neat, though. He looked quite alert and alive. Haven't seen Bryan for a while. It's good to see that.
Shel Holtz: I haven't seen or spoken to him in a while.
I see a comment from him every now and then, but he was one of the original members of our audience. The second comment comes from Catherine Arrow, who says: Hello there, Neville. I must say it was wildly disconcerting to see myself tagged in your post and then listen to you read and discuss my article on the podcast.
I would've happily discussed it with you both and answered some of the questions you had. On your mention of the Melbourne Mandate, and I think that was actually my mention of the Melbourne Mandate, yes, that's still up there on the Global Alliance for Public Relations and Communication Management site.
You can find it here, she shares the link, which we will add to the show [00:05:00] notes, and that will take you to the old WordPress site, which still has a lot of material on it. It's old now to the point that it's almost wearing whiskers, but much of the thinking we did then (I was the Global Alliance secretary at the time) is as relevant as ever in today's operating environment.
Neville Hobson: Hmm. Great. I did, I think I did respond to her comment as well on LinkedIn that I saw.
Shel Holtz: I believe you did.
Neville Hobson: Yeah. So then, 453, which we recorded on March the fourth: that's where we discussed some research from Duke University's Fuqua School of Business and explored why strategically roasting customers with humor and light-hearted banter can enhance brand loyalty and deepen customer connections.
In 454, which we recorded on March the 10th, we broke down the many implications for the practice of PR, and the actions required to prepare brands to be targets of the same kind of treatment Ukrainian [00:06:00] President Zelensky got at the hands of Trump and Vance and the complicit media at that infamous White House press conference. That's a topic I still see being discussed a lot online.
And then in 455, the episode immediately prior to this one we're recording, which we did on March 17th, we shared our thinking about the advice offered by Lulu Cheng Meservey, founder and CEO at the agency Rostra, in her manifesto calling on leaders to skip the agency and go direct. In other words, traditional PR is dead. Again.
We had a good chat about that one, didn't we, Shel?
Shel Holtz: We did. And interestingly, I just read a post by Gini Dietrich talking about how important the PESO model is in this very same environment, and about engaging in paid, earned, shared, and owned; that they all have relevance and importance. She didn't mention Meservey at all, but you could sense that presence there.
Anyway.
Neville Hobson: Yeah. [00:07:00] Excellent. We also did two new interviews in the preceding 30-plus days. The first one, which was something we were both looking forward to quite a bit, we published on the 26th of February. That was with Steve Rubel, who is a big figure from the early days of social media, with a stellar career over almost two decades with Edelman.
And Steve had a lot of insights on what we discussed; broadly speaking, we covered the wide spectrum of artificial intelligence, media analytics, and the future of PR. It was big, and it's definitely an interview, or conversation I'd say, worth listening to. It covered just over 40 minutes.
It's worth the 40 minutes, having a listen. We also had a great conversation with Sam Michelson, the CEO and founder of Five Blocks. We asked him in the interview what the origin of that was, so listen to the interview and you'll get that. [00:08:00] That was a great conversation about how AI search is changing reputation management.
So it was focused particularly on that area, and it really was great how Sam shared his thinking and contributed to the overall conversation on how AI-powered search is changing the whole landscape of how reputations are built, managed, and perceived online. So we talked about that in some detail and discussed what companies and communicators need to do in that new landscape.
So it was definitely worthwhile. That's quite a lot of stuff we published in the last 30 days, Shel.
Shel Holtz: And we're prolific, aren't we? And in addition to the interviews, there's also a new episode of Circle of Fellows up on the FIR Podcast Network. This is the monthly panel discussion featuring fellows from the International Association of Business Communicators.
I moderated the panel. It was on ethics in communication, which went nicely with Ethics Month at IABC. [00:09:00] The panelists were Todd Hattori, Jane Mitchell, Diane Eski, and Carolyn Sel. The March Circle of Fellows is scheduled for this coming Thursday at noon Eastern time. That's March 27th. And this is an interesting one.
We've never tackled this topic before: it's working with data in communication. The panelists are Adrian Ley, Robin McCaslin, Leticia Vez, and Angela Seneca. So if you'd like to tune in live and participate in that conversation, that's coming up, again, March 27th, this coming Thursday at noon. If you head over to the FIR Podcast Network, you'll get the link to the YouTube live stream.
So hope you can join us for that. And we're gonna take a short break to sell you something, and we'll be back with our stories of the month.[00:10:00]
Neville Hobson: One of the more significant developments in the world of digital publishing happened last week, and it's a move that caught the attention of creators, developers, and advocates for a more open web. Ghost, the open source publishing platform that powers many independent blogs and newsletters, has announced support for ActivityPub, the protocol that connects users and platforms across the fediverse.
We've discussed ActivityPub and the fediverse in previous episodes of this podcast. It means that every user of the paid Ghost Pro platform now has the option to publish content on their Ghost site that can be followed, shared, and replied to directly from platforms like Mastodon, Pixelfed, PeerTube, and others in the federated social web.
Once you've enabled the social web beta, your Ghost account becomes a fediverse identity, for [00:11:00] example, @you@yourdomain. That would be your social web handle. Every post you publish is automatically pushed out as a federated object, and when someone on Mastodon replies to your post, that reply should show up as a comment on your blog.
Although I've not seen that yet myself, your blog essentially becomes a native part of the fediverse: not just a website you have to visit, but a presence you can follow and interact with from anywhere in the network. Behind the scenes, this is part of a broader vision from Ghost to make the web more open and interoperable.
They've also co-founded a new nonprofit, the Social Web Foundation, with the goal of accelerating adoption of protocols like ActivityPub and pushing forward a decentralized model of content and social interaction. Ghost CEO John O'Nolan is one of the founders, and this latest feature release aligns perfectly with that mission.
It is also a clear point of differentiation from platforms like Substack, which operate in a much more closed [00:12:00] ecosystem. In fact, TechCrunch's headline said it best: Substack rival Ghost is now connected to the fediverse. That framing is telling. Ghost isn't just a tool for publishing; it's becoming part of a distributed, creator-owned web where no single platform owns the relationship between publishers and their audience.
For communicators and digital strategists, this is an important moment. It signals a shift in how we think about publishing, reach, and engagement. Instead of building audiences within walled gardens, there's now a viable way to build a presence that is platform independent, but still deeply connected to where conversations are happening. As I wrote in a post on my new Ghost blog last week, I think this move is more than just a technical upgrade.
It's a cultural signal, a sign that a growing number of people, creators, readers, and developers alike, want to return to the principles that made the web powerful in the first place: openness, interoperability, and user control. Indeed, Ghost [00:13:00] noted in its announcement: if you've been writing things on the internet for a while, you might describe it as the return of the blogosphere.
You'll know the significance of that if you were here the first time around. I should mention that Ghost newsletters aren't yet part of the ActivityPub enablement in the beta, only posts on your Ghost website; I imagine embracing newsletters will come in the near future. Also, I mentioned earlier that the public beta is available to users on the subscription-based hosted Ghost Pro service. Ghost has said that support for ActivityPub on self-hosted Ghost will come with the release of the Ghost version 6 upgrade later this year.
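For the technically curious, the "followable from Mastodon" piece rests on ordinary open standards. A fediverse server discovers an account like @you@yourdomain with a WebFinger lookup (RFC 7033) at the domain's /.well-known/webfinger endpoint, then fetches the ActivityPub actor document it points to. A rough sketch, assuming the requests package and an illustrative handle:

```python
# Rough sketch of fediverse account discovery via WebFinger (RFC 7033).
# Assumes the 'requests' package; the handle below is illustrative.
import requests

def find_actor_url(handle: str) -> str | None:
    """Given 'user@domain', return the ActivityPub actor URL, if advertised."""
    user, domain = handle.lstrip("@").split("@", 1)
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    for link in resp.json().get("links", []):
        if link.get("rel") == "self" and link.get("type") == "application/activity+json":
            return link.get("href")
    return None

print(find_actor_url("someone@example.social"))  # actor URL or None
```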
So let's dig into what all this means for communicators, for independent media, and for the direction we see social platforms evolving. Shel, what's your take?
Shel Holtz: Well, a few thoughts on this. First, I miss my RSS news reader from the first go-around of the blogosphere. That was how we managed to avoid having to go visit each [00:14:00] blog that we followed independently to see what was new.
And I think the fediverse is kind of like that, but better, given what's coming with the ability for comments to move freely around the fediverse, not just your most recent posts. Of course, I don't think that this is the return of the blogosphere, because it never went anywhere. It may be a return to greater awareness and more utility of the blogosphere.
Yeah. Again, the challenge with the blogosphere, and the reason these walled-garden social networks became so prominent, is that setting up a blog is work, and in many cases it's also money. And a lot of people who felt, I would like to share something, didn't wanna go to that trouble; it wasn't that important to them.
Or they just weren't technically able or financially able. And along [00:15:00] comes Facebook. Suddenly they're able to share their cat photos and whatever's on their mind without having to create something and maintain something and pay a monthly bill or two in order to do so. I think that's not going to change because of this. The fact that you set up a Ghost blog and a Ghost newsletter is testament to your commitment to this, a commitment not everybody has.
That's fine. There are people who wanna be consumers of this, and I think it's gonna make it easier for people to consume and easier for people to engage with comments, which is great. Now, how successful will Ghost be with this? You know, Substack, for all of the issues that it has, still has a first-mover advantage.
It's referenced now routinely in the news. I mean, I'm watching a mainstream news broadcast and they're saying, this person, in his [00:16:00] Substack. This is becoming as common as it used to be to hear that so-and-so tweeted something. It's becoming sort of the de facto place where people are sharing their perspectives that get picked up in the mainstream media.
Can Ghost overcome this? Perhaps. I don't know. Nobody has really overcome some of the other organizations who have capitalized on that first-mover advantage; think of Amazon, for example. But we'll see. This move into the fediverse may give them the momentum they need.
Neville Hobson: Possibly. I think it is interesting, and you are absolutely right in what you say. But I see this as much more than just newsletter publishing.
You are, you are absolutely right. I, I, , in what you say. but. I see this as much more than just newsletter publishing. , for instance, I moved from WordPress where I’ve been for 18 years, , to ghost. I shut up shop on my WordPress blog, , with consequences from that, , SEO, the historical, , history built up, , with, with Google, search count, , console, et cetera.
All of that, [00:17:00] I start from scratch. But for my goals were different. I’m not interested so much in that. I was interested more in the writing. And the thing that is different with Ghost, in my view, , even compared to WordPress, which is a, which is a better comparison, WordPress is also enabled. The activity pub via plugin, but ghosts is a way easier to set up.
In fact, there is no setup. It all just happens. You just enable the beta and boom, you're there. With WordPress, you have to install a plugin. In my case, one of the reasons why I shifted was that my hosting service would not support the plugin. It wasn't WordPress, it was the hosting service that refused to enable it, 'cause they had something else going with a similar file name and so forth and so on.
So I thought, no, I'm outta here. I'm gone. So there are other factors too, but that was a big one for me. But the major reason was simply the writing. I didn't want to be a website admin anymore. I was a WordPress admin person more than I was a WordPress blogger. I was fed up with it, didn't want that anymore.
So I stopped. The [00:18:00] old site's still there as an archive, but I've got a new site. The only difference with the domain name is it's now .io as opposed to .com. So that will appeal to many people. ActivityPub isn't yet supported on the self-hosted version of Ghost; otherwise I could have done that.
I could have downloaded the software and set it up on a server, just as you do with WordPress. I didn't wanna do that anymore, yet I know two friends of mine are doing that. Well, you don't have to do that with WordPress
Shel Holtz: either, right? You could, you could set up on wordpress.com.
Neville Hobson: Yeah. But I had also had enough of the issues going on in WordPress, with the CEO and his legal fights with another reseller of WordPress hosting.
It was ugly, and it also struck me that you're constantly bombarded with upgrade to this, hey, this new plugin is only $20 a month, all that, daily. Literally, enough. So I moved. I don't have any regrets three weeks after the move, although I started the new presence back in January. So, in terms of where this is [00:19:00] going from a social web slash ActivityPub point of view, this is purely the beginning for Ghost.
The fediverse has been there a while, and Mastodon has been the big leader in that. I think now is the time for this sort of change to happen, with another player making a firm commitment, which Ghost did quite a while ago. Now it's public; the public beta is there. They've had warm support from many of the obvious places.
The tech press, for instance, the likes of TechCrunch, The Verge, Vox, et cetera, all of those guys, and a number of prominent, influential voices who have set up shop on Ghost for both blog and newsletter. So I'm just, you know, one of the many individual users there. I've had some great engagements via my new newsletter, which has been quite pleasing, more than I ever had with WordPress.
That's no criticism of WordPress. They had a newsletter, but not to the same scale as how Ghost does it. So I think when the newsletter is supported in ActivityPub, [00:20:00] that's when you're gonna see bigger take-up, I think, from many of the big newsletter publishers. Will that shift the needle in any form?
Right, it's hard to tell. I think the reality as I see it, certainly from a communicator's point of view: let's say you are a communicator in an organization looking at developments in this broad area, particularly with all the talk about, let's look at blogging again, move away from these walled gardens.
Here's another option you need to be considering. It's not much different to WordPress conceptually; practically, it's very different. WordPress has a huge ecosystem of hundreds, in fact thousands, of developers, plugin developers, theme developers. There are theme marketplaces that work.
Ghost doesn't have any of that, or very little of it. So there's a lot more need for you to be hands-on, like in the very early days of blogging. Yep, you're gonna have to write some HTML. You've got JavaScript and CSS to get a handle on if you want to customize [00:21:00] stuff. If you don't wanna do any of that, there are resellers who will host it for you and take care of that.
In my case, I went the hosted route to take care of the general installation of everything. I concentrate on the writing, and largely I'm doing that. I think this is an important move in terms of what is gonna happen with the fediverse, and enabling this appealing idea of being wherever you are on a part of the fediverse that's connected to everything else on the fediverse.
You can engage with content on a different service entirely, and guess what? Even Bluesky is supported, and that uses a different protocol to ActivityPub. Now, that's still, I think, an intent rather than an action, because there's a workaround you have to do: you've gotta follow somebody who's developed a bridge to enable it.
And that's not working too well at the moment, but I'm excited about it because that brings Bluesky in. There's a barrier down immediately between the two protocols, because it doesn't really matter. You, the [00:22:00] average user, won't be bothered about, oh, it's AT Protocol and I've got ActivityPub.
But you don't care about that. You shouldn't even be thinking about that. You just write and publish. Someone on Bluesky leaves a comment on Bluesky, and it shows up on your blog. Reminds me very much of, not the early days so much, but the early development days of WordPress in particular. Shannon Whitley comes to mind immediately.
Mm-hmm. With his tweet chat plugin that enabled a comment on Twitter to show up in the WordPress blog post you'd commented on, and that was outstanding. An outstanding feature that all went away during the changes that went on, and a ton of other reasons. Now we've got something that has the promise to fulfill that intent, in a way where you don't have to do anything at all.
You, as the blogger. It would be great once that's connected to newsletters too, because then you're gonna see all the barriers down in terms of engagement. And that should be of interest to [00:23:00] communicators in business, B2B. This will come to the platform. There are already a lot of businesses on Ghost; some are there and others are experimenting.
And that's why I would advise communicators to take a look at Ghost, with this thought in your mind: this is going to break down barriers across different platforms because of the fediverse, whether it's AT Protocol, whether it's ActivityPub, workarounds, whatever.
It enables you to do things and enables others to connect with you. So I’m pretty excited about what’s coming.
Shel Holtz: Yeah, I have an email newsletter for the company I am employed by, and it goes out once a month. We use Mailchimp to create and distribute it and manage the subscriptions and the like. And I have been thinking about changing to, frankly, Substack, just to get that cachet.
Neville Hobson: Lots are on Substack, I hear.
Shel Holtz: Yeah. Well, it's the cachet of the name, because you're now hearing it in the media. You're now hearing it on podcasts, people referencing, oh, on this person's Substack, on that person's; they don't even say newsletter, they say Substack. On one hand, transitioning to Ghost would give us the ability to build a broader readership through the integration with the fediverse.
On the other hand, you have to wonder how many people hear Ghost and go, well, what kind of rinky-dink outfit is this? For people who haven't heard of it and don't know what it is, there's just that reputation question. And it's not that Substack doesn't have some reputational challenges that they're facing, as we have mentioned.
Seriously, there are people who have left over some of this. But still, yeah, I would have to stop and think about what's best for my organization, sure, if I were gonna make that transition.
Neville Hobson: I would say I have a simple view, Shel, frankly, and it's easy for me as an independent person. I don't work for a company.
I don't have big organizational issues to consider, but I look at that the same as I would look at X. I definitely would not wanna be in [00:25:00] a toxic place like that. Now, I'm not saying Substack is toxic; I don't know that. I do know, though, a number, or a handful, let's say, including a couple of prominent ones, who have left Substack and have joined Ghost because they do not wanna be in a place that has, as I mentioned, the N word: a number of people who are, allegedly, tuned into that kind of thinking.
So I think your point is valid, though. It's got name recognition right now, but hey, listen, everyone had that issue when they started out, and time will tell whether they've got traction. I believe Ghost has serious traction. They've got a good presence. They've got a nonprofit foundation behind them. They've got money, they've got support, and they are approaching it absolutely the right way, unlike WordPress, for instance, which I think about quite a bit still.
So I think the newsletter is important. It's definitely comparable to Substack. It's not comparable to Mailchimp or any of those other ones; that was newsletter only, via email.
It was a newsletter only via email. [00:26:00] This is newsletter and web via a publishing mechanism on the, on the server that you host your blog on. It’s all takes care taken care of in the background. It is very much a social web approach to it all, and this then enables this, , beta service.
, it’s, , I think as I mentioned, , maybe I should restate. It’s a very early beta, the stuff not enabled yet, so I think you should test it out. , test out Substack too, if you have time. , it’s
Shel Holtz: It’s interesting. I don’t know if either of them has corporate clients. I mean, they very well may, but it’s not something I’ve looked into.
Neville Hobson: Well, it depends how you’re defining corporate clients. I mean, there are a number of publicly listed companies on there. There’s a handful of big media properties using Ghost, as there are on Substack. So, you know, take your pick.
Shel Holtz: Well, let’s move along and talk about jobs, because people leave them. They leave them for all kinds of reasons, but the one we hear about most is that people don’t quit their jobs, they quit their bosses. We may need to put a new spin on that. [00:27:00] According to a recent survey from the Grossman Group, people may actually be quitting because the company doesn’t communicate well. The survey found that 61% of employees who say they’re unlikely to stay in their current jobs cite poor communication as one of the top reasons why. That’s not a marginal number.
That’s the majority of employees who are at risk of walking out the door pointing directly at communication breakdowns, and it’s not the first time we’ve heard this. AlertMedia’s 2025 Workplace Survey Report finds that employees are craving more consistent, clear communication, especially when it comes to their safety and wellbeing. One of the standout findings from the report: psychological safety depends heavily on good communication, and when that’s lacking, trust falls apart. We’re not just talking about the usual day-to-day work cranked out by professional communicators, you know, HR emails, articles on the intranet, the weekly newsletter.
What employees are flagging isn’t always about [00:28:00] channels or campaigns. It’s about day-to-day interactions. It’s about the way their leaders talk to their teams. It’s how transparently companies share bad news. It’s whether employees feel listened to and included in the loop. These are all things that internal communicators should be focused on if the company has an internal communications function at all.
In the Grossman Group research, a full 70% of respondents said that when communication is poor, it negatively impacts their productivity. Close to the same number, 69%, say it drags down morale. That’s a direct line to disengagement, quiet quitting, and ultimately attrition. The cost? Well, Gallup estimates that low engagement, much of which stems from communication issues, costs the global economy $8.8 trillion. That’s trillion with a T. Now, there’s a wrinkle in the Grossman survey results: employees overwhelmingly believe communication is [00:29:00] everyone’s responsibility, yet they also made it clear that their number one ask is for better communication from, wait for it, their direct managers. In fact, that was the top request, even more than hearing from the CEO or the leadership team.
So maybe employees do leave their managers, but specifically the managers who can’t or won’t communicate effectively. Now, another thread worth pulling comes from a recent CNBC piece highlighting what they call a vibe shift around layoffs. For years, companies could lay off workers with a boilerplate statement about market conditions, and that was that.
Now employees and the broader public are demanding transparency. That is, they want better communication. They wanna know why certain people were cut, how the decisions were made, and what leadership is doing to support those who are impacted. Anything less feels disingenuous and fuels a toxic narrative inside and outside the organization. [00:30:00]
Now, I find it disheartening that companies are still doing this. I communicated all of this kind of information during layoffs going back to the 1980s. What can internal communicators do about the situation today? First, we can stop thinking of our job as just publishing information. I know I harp on this a lot, but in a lot of communication departments I still see, that’s all they do. Professional communicators should be training, coaching, and empowering people managers to communicate better, especially in high-stakes, high-emotion moments. Think layoffs, reorgs, workplace safety incidents. This is where trust is either built or broken. Second, we need to listen more and help others listen better.
Employees wanna feel heard. That means internal comms teams should be building better feedback loops, making space for upward communication and encouraging open dialogue between teams and their leaders. I’m reading a book right now called Leading the Listening [00:31:00] Organization just so I can figure out how to better do that.
Third, we can help shape the culture of communication by modeling clarity, empathy, and transparency in everything we produce. Interestingly, even in organizations where morale is high, consider North Carolina State University, where a recent survey showed strong pride among the staff, there are still gaps.
Fewer than half of the employees at NC State said they felt fully informed about leadership decisions. Pride and positivity don’t eliminate the need for better communications. If anything, they underscore the importance of maintaining that trust through consistent, honest communication. We’re in a moment where communication isn’t just a soft skill, it’s a retention strategy, it’s a risk mitigator, and for internal communicators, it’s an opportunity to step up, not just as messengers, but as the strategic enablers of better leadership at every level of the organization.
Neville Hobson: [00:32:00] It makes a lot of sense, I think. This is something we talk about frequently, isn’t it? And here we are again with this about managers, about better enabling them to communicate, et cetera. I just wonder why it doesn’t happen.
Shel Holtz: It’s interesting, because the survey indicates that for all the years we’ve been talking about this, the needle doesn’t seem to have moved.
Neville Hobson: It doesn’t. And I’m also thinking about Edelman’s Trust Barometer; this area features in there in terms of a general lack of trust. But you threw out a lot of metrics in that narrative, Shel, so let me ask you: what would you say are the top three things communicators need to do about this, if it’s enabling managers to be effective communicators themselves? What do communicators need to do, specifically?
Shel Holtz: Well, communicators, first of all, need to get buy-in from their leaders that what they are there for is not just to inform employees of what’s going on. This is more than corporate journalism. This is a department [00:33:00] whose expertise is to improve communication throughout the organization, and that means all kinds of communication. How many communicators out there are partnered with their training departments, you know, learning and development? How many of them are working with managers around communication issues they’re facing, either in their teams or in dealing with other teams? This is what we should be doing. We should be facilitating the flow of information and knowledge and helping managers communicate effectively, two-way, with the members of their teams at all levels of the organization, frontline managers and senior leaders. We really need to help organizations become effective at communication at all levels, not just on the intranet and across email. So that’s the big thing.
Neville Hobson: Okay, so how do we then avoid [00:34:00] having this conversation again in six months? What do you say to that?
Shel Holtz: I don’t think there’s any way we can avoid having this conversation in six months. I think there are organizations led by people who believe communication should be writing nice stories about the wonderful things happening in the organization, stories that nobody’s going to read. And that’s great; that’s all we need, as far as they’re concerned. You know, we talk about how the internal comms star rose during the pandemic because companies had to lean on communicators when everybody was working from home and we weren’t accustomed to reaching and engaging people that way. Well, it’s been five years, and that star is falling again, I’m afraid. And I think it’s incumbent upon us as communicators to make the case that what we do really is about retention and risk mitigation, and [00:35:00] building engagement and improving productivity. We just have to connect those dots for the leaders of the organization so that they can take advantage of what communication brings to the table.
Neville Hobson: A call to action for internal communicators, I hear there, Shel. That’s a good one. So let’s go to something we haven’t really talked about yet in this episode: AI. We knew it was coming. This is, to me, one of the more interesting recent developments in how traditional media is experimenting with AI. It comes from the British newspaper The Independent, which has announced the launch of a new AI-powered news service called Bulletin, designed specifically for what they describe as time-poor audiences. The idea is simple but compelling: use artificial intelligence, specifically Google’s Gemini AI model, to generate ultra-brief news summaries, each no [00:36:00] longer than 140 words. These summaries are created by rewriting original reporting from The Independent or content from news agencies. The key point, though, is that journalists review and check every single summary before it goes live. They’ve hired a dedicated team of seven staff to support Bulletin, and the goal is to offer readers a fast, accurate briefing service while maintaining journalistic integrity. It’s part of The Independent’s broader strategy to make its journalism more accessible to busy readers, those, they say, who are juggling long work hours and family responsibilities, or who are just overwhelmed by information overload. Bulletin will launch at the end of March on bulletin.news, with initial sponsorship from the social platform WeAre8, which includes investor and former English Premier League footballer Ferdinand among its backers. As part of that partnership, The Independent will produce exclusive content for WeAre8 as well. What makes Bulletin particularly interesting, [00:37:00] I think, is how the publisher is positioning the effort. Christian Broughton, The Independent’s managing director, said the journalists themselves were closely involved in shaping the AI workflow, ensuring they remain in control of the content. Editor-in-chief Geordie Greig describes Bulletin as brilliant shorthand for The Independent’s journalism, a supplement to, not a replacement for, the deeper coverage: newsletters, podcasts, and documentaries. And of course, The Independent’s move isn’t happening in isolation, as other UK publishers like Newsquest and Reach are also experimenting with AI-assisted reporting, as are others in the US and elsewhere. Still, The Independent seems intent on framing Bulletin as a human-led initiative supported by AI rather than the other way around. So is this a new model for trusted, scalable journalism in an age of short attention spans and algorithmic overload? Or is it a step toward automating too much of what journalists do? [00:38:00] What do you think, Shel?
Shel Holtz: Well, it could be either one. It depends on how they go about it; it’s all in the execution. But you’re right, there is a lot of AI infiltrating the world of journalism these days. And what I find most interesting about it is that it is uneven. There don’t seem to be trends. It all seems to be ideas that are generated internally and implemented, so you have different publications using AI for different things. And some of them could be really good for journalism, some of them not so much.
For example, the Los Angeles Times has introduced an AI-driven labeling system to flag articles that take a stance or are written from a personal perspective. Their billionaire owner introduced this in a letter. It’s called the Voices label, and it applies to opinion pieces along with news commentary, criticism, and reviews. Some [00:39:00] articles also include AI-generated insights, which summarize key points and present alternative viewpoints. This is not making a lot of people happy. Matt Hamilton, vice chair of the LA Times Guild, said in a statement to The Hollywood Reporter that they don’t think this approach, AI-generated analysis unvetted by editorial staff, will do much to enhance trust in the media. And early results have raised concerns. The Guardian highlighted an LA Times opinion piece about AI-generated historical documentaries where the AI tool claimed the article had a center-left bias and suggested that AI democratizes historical storytelling. Another flagged article covered California cities that elected Ku Klux Klan members in the 1920s. The AI-generated counterpoint stated that some historical accounts frame the Klan as a cultural response to societal change rather than a hate-driven movement, which I suppose is not [00:40:00] necessarily an inaccurate summary of those accounts, but it’s awkwardly positioned as an opposing view. Then you have Il Foglio, an Italian newspaper, which published an edition entirely generated by AI. The Associated Press has collaborated with Google to integrate real-time news updates into Google’s Gemini chatbot. Time magazine introduced Time AI, a platform that enhances journalism engagement using generative AI; it offers personalized and interactive storytelling experiences. Reuters employs generative AI across various aspects of news production, including reporting, writing, editing, production, and publishing, but they do disclose when content is primarily or solely AI-generated. ESPN began publishing AI-generated recaps of women’s soccer games. The Garden Island newspaper in Kauai, Hawaii, introduced AI avatars named James and Rose to deliver live broadcasts by discussing [00:41:00] pre-written news articles. Quartz uses ChatGPT to write hundreds of articles every day on Securities and Exchange Commission filings. And various news outlets are using AI for things like generating interview questions, predicting churn, transcribing interviews, suggesting headlines, and proofreading. It is all over journalism, and to argue that it is somehow inappropriate or unethical is, I think, the metaphor we have used on this show more times than we probably should have: King Canute trying to hold back the tide. It’s going to become a de facto part of journalism. And one of the reasons this makes sense: if you think about the budget cuts that print journalism especially has been experiencing, and if they can get AI to pick up some of that drudgery load so that the reporters can focus on doing the reporting, you know, the shoe leather on the streets, that’s to their benefit. So yeah, I think you’re going to see some [00:42:00] newspapers and other media outlets succeed with this. They’re going to find the right balance. They’re going to keep the humans exactly where they should be in the loop. Others, like the LA Times, maybe not so much.
Neville Hobson: Yeah, that’s how I saw it too. I think, given the information I’ve found about what The Independent is planning to do, the role of journalists in the production of the content that’s generated with the help of the AI is absolutely crucial to this. You mentioned Quartz. I was reading a Quartz piece recently, and it was quite clear to me that no journalist had written this content. And I just wonder, I don’t know this, but I just wonder: do they have actual humans checking the stuff before it gets out? I’m assuming they would. Therein lies the question, I think.
Shel Holtz: Interestingly, with Quartz, they temporarily shut it down because of inaccuracies and then brought it back, expanding it to publish longer articles with disclaimers about the potential for AI-related hallucinations that you could read. [00:43:00]
Neville Hobson: But you see, that’s not good enough. Totally not, because you get that with the raw prompt response from ChatGPT, at the bottom of every single one: you know, it may be inaccurate, you need to check it. What you need to do is create content. You might use the AI, in the case of The Independent, to gather the stories it has been asked to cover and, assuming it’s prompted in the right way, if that’s how they’re going about it, to create the content that you, the human, can then edit. You are the subeditor, if you like; let’s call it the verifier, the checker, whatever. You’ve got to do all of that too, and so you don’t actually have to write the story. Which is, again, a discussion topic that would take us down a huge avenue, a huge road, if we wanted to get into that in this episode, which we don’t. That’s for another day, I think. But I think you are right. It’s a tsunami that’s approaching. This is going to impact journalism, unquestionably so, in good ways, certainly, and in not-so-good ways. [00:44:00] The not-so-good ways, I suspect, are likely to be self-inflicted from within the industry more than anything else, by those who see an easy way to replace people, or to not have to worry about increasing budgets to do the things they want to do; they can employ an AI to do it. And I suspect the results for those organizations are going to be mixed, because the humans who need to read the content and pay money for it are not going to do that. There’s likely also to be regulatory pushback in a significant number of countries, so they’ll be threatened in all those ways. There will be protests no matter what. There will be people who think this is a very bad idea, totally. And the bad idea, I think, is definitely the case for those who do not go through the right process to do this, which The Independent seems to have planned for. I’m looking forward to seeing the first edition. That website they’ve got, bulletin.news: I took a look at it just before we started recording, and [00:45:00] all it gave me was a completely blank page. Nothing on the page at all. I looked at the page source and there was nothing there either. So I don’t know what’s happening with that. Maybe it’s just not live yet.
Shel Holtz: Well, it’s late in March, but it’s not the end of March.
Neville Hobson: Well, indeed. But if the story’s out there, they would be wise, I would say, to prepare something saying coming soon, or whatever it might be. But I’m going to keep a close eye on it, because I’m keen to see how they’re doing this.
Like every average Joe, I’m time-poor like everyone else, but I’d put time into this just to see how it is. I did ask Gemini myself how I could do something like this if I wanted to be a kind of news summary publisher. And to make it easy, I said, you know, how would I produce a newsletter that summarizes everything I’ve published on my website in the preceding 30 days, with little summaries of all of it? And it told me quite clearly how I could do this. The only thing missing is the bit I’m keen on, which is automating it. I don’t want to have to create a template and then [00:46:00] copy and paste. No, no, no. What’s the point of that? I’m looking for something that would enable me to create something automatically that I can then review, approve, and publish. There are ways to do it, and there are third-party tools you could use; Zapier comes to mind, but those are too manual. So I’ll look into it further, I think. But if The Independent is doing this, then there is a means. It may be a question of cost and the specialists you need to bring on board, but I could see this coming in a big way.
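For anyone who wants to tinker with what Neville describes, here is a minimal sketch of the “summarize the last 30 days” step in Python, assuming the site exposes a standard RSS or Atom feed. The feed URL is a placeholder, and the blurbs simply reuse each entry’s own summary; a call to Gemini or another model could be slotted in at the marked line, with a human still reviewing the digest before it goes out.

```python
# A minimal sketch, not a turnkey tool: collect posts published in the last
# 30 days from an RSS/Atom feed and assemble a plain-text digest.
import calendar
from datetime import datetime, timedelta, timezone

import feedparser  # pip install feedparser

FEED_URL = "https://example.com/feed.xml"  # placeholder feed address


def build_digest(feed_url: str, days: int = 30) -> str:
    """Return a plain-text digest of entries published in the last `days` days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    feed = feedparser.parse(feed_url)
    lines = [f"Posts from the last {days} days", ""]
    for entry in feed.entries:
        published = entry.get("published_parsed")
        if not published:
            continue  # skip entries without a usable date
        published_dt = datetime.fromtimestamp(calendar.timegm(published), tz=timezone.utc)
        if published_dt < cutoff:
            continue
        # The blurb reuses the feed's own summary field, truncated.
        # A summarization-model call could replace this line, with the
        # output reviewed by a human before publishing.
        blurb = entry.get("summary", "").strip()[:280]
        lines.append(f"- {entry.get('title', 'Untitled')}\n  {entry.get('link', '')}\n  {blurb}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_digest(FEED_URL))
```

From there, Zapier or a simple cron job could handle the scheduling, which is the automation gap Neville mentions.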
Neville Hobson: And here in the UK, Reach is a newspaper publisher that owns a significant number of regional newspapers, as well as a number of the national tabloid dailies. They’ve been employing AI tools to create some of their reporting for quite a while. So when you read in my local newspaper down here in Somerset, for instance, that this restaurant in that town has just published a new menu with its summer offers of nice food and all that stuff, it makes a story. I don’t know, and if you’re listening and I’ve got this wrong, correct me, but I bet you an [00:47:00] AI wrote that, not a journalist. With some of the writing, too, you get suspicious about the quality, and you wonder: is this AI-generated? So the more this is done the way The Independent is approaching it, the better; their approach seems to me very good. AI is the assistant for the human. These are human-led initiatives, assisted by AI, not the other way around. That’s the way to do it, in my book.
Shel Holtz: Yeah. I’m untroubled by the notion of articles in the mainstream press that have been written by AI if they’re articles that don’t require great writing, and the securities filings articles are a great example. Something hits a government database that you’re monitoring, the basic facts are there, the model has been trained on tens of thousands of articles about securities filings, and if it can share the facts accurately and somebody does a quick review to make sure it’s right, why not? Does that need a Pulitzer Prize-winning journalist to crank it out? [00:48:00] What’s important is that the information be shared in a timely way among people who are going to make investment decisions based on these types of things, not how well it was written. Have those reporters go out and do the writing on the stuff where it matters. Some of this writing just needs to be good enough.
Neville Hobson: Yeah, you could be right. I’m not saying I disagree with you, and I don’t necessarily think I fully agree with you either, but to me it’s like this: you need to be sure that what you are reading, or consuming, to look at it a different way, is authentic. And I don’t mean the literal use of the word authentic; I mean, is it what they say they do? So if they’re using AI to help them, they need to disclose that somewhere. And yes, I know, I hear the arguments from people saying, no, you don’t need to. Yes, you do. We are not yet at a stage where you don’t need to help people understand that you are genuine and that you are approaching this the right way. Because if you didn’t do this, that news that someone would find interesting wouldn’t get reported, because you don’t have enough [00:49:00] journalists to do it. So that answers a big part of the question about how we’re going to ensure we’re fulfilling a social purpose, even though we’re a business, of course: the purpose in society to report on the news of interest in your niche, in your community, in your geography, whatever it might be. When we don’t have enough journalists, we’re stuck with cashflow problems and so forth, and we’re probably going to close down. That is one of the reasons why, I remember reading this about Reach a year or so back, they were doing this for local reporting and indeed sports reporting in particular. So, the thing about business results that you talked about: where it’s just data, that makes it easier for it to be reported on by an AI. But it won’t necessarily have the analysis, here’s what XYZ company did, they reported a loss, and here’s what that means for their market position going forward. The human writes that bit, unless the AI has [00:50:00] the means to do it, which requires a human to be involved at that stage anyway. So that’s taking it down a slightly different avenue. Again, this is a huge topic, Shel, and I think it’s great to talk about it like this, because there is no silver-bullet answer. There’s no, this is the way you do this, and there are 15 other ways you could do it too. But broadly speaking, I agree with your point that there are things worthy of reporting in the media that don’t justify a Pulitzer Prize-winning journalist doing it, in which case you’ve got a bot to do it. Yeah, that makes sense. But the human, and it doesn’t have to be the Pulitzer Prize winner, although why shouldn’t it be, needs to revise it and authenticate it and verify the story. So the human must still be involved.
Shel Holtz: Yeah, you need to have that copy editor role, for sure. But I don’t need authenticity for certain types of, you know, two-paragraph, purely factual articles. I know I’ve mentioned this multiple times, but even before ChatGPT was released in [00:51:00] November 2022, the Associated Press was using, I think, Writer or Jasper to crank out articles about high school baseball games. They had never had the reporting staff to go out and cover those games before, but the stats were recorded in some accessible database, and now you could just turn the AI loose, train it on baseball score stories, and let it scrape up the statistics from the game and write the story. Somebody edits it and off it goes; who cares? It doesn’t need to be authentic. I need to know if my kid’s team won. And, you know, if it’s a question of, are we going to send reporters out to do this, or are we going to send them out to cover the government scandal, I’m going to let the AI write the high school sports stories and send the reporter out to report on the government scandal. That’s where the authenticity is required.
Neville Hobson: Yeah, I disagree. Sorry, I need the authenticity for everything, no [00:52:00] matter what it is. In fact, the other thing is, a two-paragraph report I wouldn’t read anyway, because I want the meaning. I don’t just want the score, I want the meaning.
Shel Holtz: But before the AP started doing this, they weren’t covering those games at all, because the resources weren’t there to do it.
Neville Hobson: No, indeed. So the resource is now there to do it properly, in which case, do it properly is what I would say. So yeah, the authenticity is important, and like I said at the beginning, not the literal meaning of the word: can I trust that what I read in print, metaphorically speaking, is the truth, or is accurate, or is factually correct? How do I know that?
Shel Holtz: And what’s going to damage your credibility is if enough of those articles turn out to be inaccurate, which is why you still need somebody checking.
Neville Hobson: And hence you need the authenticity. Exactly. Yeah.
Shel Holtz: But do you need somebody to go to the game, sit in the press box, take notes during the game, and file the story? It depends on the game. Well, not a high school game, for sure. Not a regular-season high school game.
Neville Hobson: It depends on what the report is going to be. If it’s a lot of analysis and [00:53:00] prediction and so forth that you’d expect, then yes. I was looking this weekend at a report about the recent rugby championship in Europe, the Six Nations, a terrific report on one of the sports websites. I could tell the writer really knew this topic exceptionally well, and the style of writing, the tone, all that stuff was engaging. It was entertaining. That’s what I want to read, not a dry two paragraphs that simply says, this is what happened, and in the 46th minute this guy did that, and they went ahead and they won the championship. No, I can get that anywhere; get a blogger to give me that. I want to read that breadth and depth of information, and I guarantee, therefore, I would pay for that newspaper and I would subscribe to it.
Shel Holtz: I guarantee you the people who are interested in how the high school team did will read any story versus reading no story. And that’s the option these publications have right now.
Neville Hobson: There we go. Such is the landscape, Shel.
Shel Holtz: You know, if it’s a feature story, [00:54:00] by all means. But if it’s really just, there were nine innings and here’s what happened, I honestly don’t care how that got written, as long as it’s accurate. Fair enough. And like I say, I think the issues will arise if enough of those end up being wrong, or people simply deem them
Neville Hobson: Not worth your time reading, because it’s crap, basically.
Shel Holtz: Well, again, if you care about the score of the game, it’ll be fine as long as it’s good enough.
Neville Hobson: Okay, that’s a good point.
Shel Holtz: And we’ll move on, because we have more AI to discuss, starting with a brief report from Dan York.
Dan York: Greetings, Shel and Neville and FIR listeners, this is your report from all around the world. It’s Dan, coming at you from the Vancouver, British Columbia, airport, where I was planning to have much longer to put together a report, but I didn’t. So the one thing I will say is that I spent the week in Bangkok, Thailand, at the Internet Engineering Task Force meeting 122, about internet standards. There’s some interesting stuff going on this time around with [00:55:00] the evolution of encryption and of protecting the web in so many different ways, and there were a lot of interesting discussions. One thing to pay attention to is some new work going on around AI preferences. If you’ve worked with websites for a while, you’ll know about the robots.txt file that you use to indicate that you want certain parts of your site blocked or not. In this case, it’s a new mechanism that will allow you to indicate whether you want certain parts of your site to be scraped by AI engines or not. It’s a new bit of work, it’s called AI preferences, and it’s emerging: it’s being developed and standardized, but it isn’t finished yet. After that, it needs to be implemented in browsers and things like that, so it’s a ways off, but there is work being done here that’s worth paying attention to. Another little area of work was around the World Summit on the Information Society, or WSIS+20, review that’s happening this summer [00:56:00] in Geneva, and indeed throughout the year. That’s something else to pay attention to: if you look up WSIS+20, you can read a bit about what’s going on this year. That’s all I’ve got time for today; I’m just going to give a quick little report like this and send it off to you guys. As always, you can find more of my audio and writing at danyork.me. Thanks for listening. Bye for now.
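For readers who want to see the existing signal Dan is referring to, here is a minimal sketch using Python’s standard library that checks a site’s robots.txt. The site URL is a placeholder, and GPTBot is just one example of an AI crawler token publishers list today; the IETF AI-preferences vocabulary is still being developed, so this only illustrates the current mechanism it would build on.

```python
# A minimal sketch: check whether a given crawler user agent may fetch a URL
# according to the site's robots.txt. The URL and the "GPTBot" token are
# illustrative; the emerging IETF "AI preferences" work would add richer,
# AI-specific signals on top of this kind of opt-out.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"          # placeholder site
PAGE = f"{SITE}/articles/some-story"  # placeholder page

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for agent in ("GPTBot", "Googlebot", "*"):
    allowed = parser.can_fetch(agent, PAGE)
    print(f"{agent}: {'allowed' if allowed else 'disallowed'} to fetch {PAGE}")
```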
Shel Holtz: Thanks, Dan. Sorry to hear about your flight delays, and I’m sorry they kept you from recording a full report, but I did enjoy your discussion of the AI preferences standard. We will have the link to the Internet Engineering Task Force group that is working on that, and I’m very interested to see how it develops and whether there will be widespread acceptance of it among the publishers of sites who would be affected by it. But let’s keep talking about AI, because the conversation around AI and the workplace is shifting, and that’s happening [00:57:00] fast. We’re no longer just wondering if AI will impact our jobs. There’s a new question floating around: what happens to the perception of expertise when AI starts performing the tasks that we once relied on to prove our value?
That question is central to a recent Business
03/23/25 | FIR #456: Does AI Put Communication Expertise At Risk?
Listening to Employees is Vital. Is AI-Powered Sentiment Analysis a Viable Approach?
A growing number of companies are offering internal communicators AI-powered advanced sentiment analysis tools to process real-time insights from chat platforms, emails, and discussion forums. These solution providers claim that analyzing employee conversations will help internal comms teams understand how employees respond to things in real time, just as sentiment analysis informs marketing and advertising decisions.
Listening is, of course, important.
The word “communication” comes from the Latin communicare, which means “to make common” or “to share.” True sharing, after all, is not a one-sided act. Arriving at a mutual understanding about something requires it to move both ways—from speaker to listener and back again. Communication is not just about telling. It is also about understanding. If one person speaks but the other doesn’t listen or understand, communication has not taken place. Nothing has been made common.
Hence, listening is a key part of any strategic communication effort. It applies to internal communication as much as to public relations, marketing, and advertising.
The Employee Voice
In internal communication, this is often called “the employee voice.” Engage for Success, the chartered UK organization that promotes employee engagement as a better way to work, lists employee voice as one of the four enablers of engagement. Engage for Success defines employee voice as the result of an organization that sees its people as central to the solution (not the problem). “Employees are involved, listened to, and invited to contribute their experience, expertise, and ideas.
“Employee voice exists where the organization has put mechanisms in place to enable it to have an ongoing conversation with its staff, in different ways, to ensure every voice is heard,” according to the Engage for Success website.
Internal communicators have used various mechanisms, from surveys and pulse checks to focus groups and suggestion boxes. Even sentiment analysis isn’t new. Intel began using sentiment analysis a decade ago to gauge workplace morale and identify and address employee concerns before they boiled over and led to increased turnover.
While sentiment analysis can be a powerful tool for understanding the employee voice, its use also raises important ethical, legal, and cultural considerations. Unlike customer sentiment analysis, where brands analyze public social media posts or survey responses, internal sentiment analysis involves monitoring private or semi-private conversations within an organization. Employees may not be aware that their messages are being analyzed, leading to concerns about privacy, trust, and surveillance.
Listening, Not Surveillance
Transparency is crucial. If you plan to use AI-driven sentiment analysis, you must openly disclose this practice to employees. Communicators should explain what data is being collected, how it will be used, and—just as importantly—what it won’t be used for. For example, if the goal is to identify widespread concerns about a policy change, employees should know that individual messages will not be flagged or attributed to them personally. Without clear boundaries, employees may feel they are being watched rather than listened to, which can erode trust and discourage open dialogue.
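To make that boundary concrete, here is a minimal sketch of aggregate-only scoring, assuming messages arrive as channel-and-text pairs with identifying metadata already stripped. VADER stands in for whatever model a vendor’s tool actually uses, and the channel names and sample messages are purely illustrative.

```python
# A minimal sketch: score sentiment per channel, never per person.
# Requires: pip install nltk
from collections import defaultdict
from statistics import mean

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download


def channel_sentiment(messages):
    """Return the average VADER compound score per channel.

    `messages` is an iterable of (channel, text) pairs with no author data.
    Only channel-level aggregates are returned; per-message scores are discarded.
    """
    sia = SentimentIntensityAnalyzer()
    scores = defaultdict(list)
    for channel, text in messages:
        scores[channel].append(sia.polarity_scores(text)["compound"])
    return {channel: round(mean(vals), 3) for channel, vals in scores.items()}


if __name__ == "__main__":
    sample = [  # illustrative messages, not real employee data
        ("benefits-chat", "The new policy rollout was confusing and stressful."),
        ("benefits-chat", "Appreciate the FAQ, it cleared a lot up."),
        ("it-helpdesk", "Third outage this week, morale is pretty low."),
    ]
    print(channel_sentiment(sample))
```

Because only channel-level averages leave the function, the report a communicator shares can show where sentiment is trending without pointing at any individual, which is exactly the boundary worth disclosing to employees.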
Even then, employees could become more circumspect about what they say, knowing their every word is being scrutinized, leading to less candor and diminished value from collaboration.
Another key consideration is data accuracy. AI models, while powerful, are not infallible. They may misinterpret sarcasm, cultural nuances, or industry-specific language, leading to inaccurate conclusions about employee sentiment. Internal communicators must validate AI-generated insights with qualitative research, such as focus groups or direct conversations, before taking action. This safeguards against misinterpretations that could lead to poor decision-making or unnecessary interventions.
Ultimately, sentiment analysis should be framed as a listening tool, not a surveillance mechanism. Communicators should emphasize that the goal is to enhance engagement and address concerns, not to monitor individuals or penalize dissent. Organizations can use sentiment analysis to strengthen internal communications while maintaining a culture of trust and transparency by setting clear expectations, communicating openly, and supplementing AI insights with human judgment.
Or better yet, don’t use it at all. There’s no telling how many employees won’t believe these tools won’t be used for nefarious purposes. Just because you can doesn’t mean you should.
02/28/25 | Listening to Employees is Vital. Is AI-Powered Sentiment Analysis a Viable Approach?
06/27/25 | Internal Communication is Failing