Toot or Boot: HR Edition
Welcome to Toot or Boot, where a rotating crew of forward-thinking HR professionals dive into the latest news and trends shaping the workplace. We’re passionate about finding modern solutions and advocate for transforming the world of work into a space that’s fairer, more inclusive, and supportive for all. Join us as we challenge the status quo, spark meaningful conversations, and explore innovative ways to create a better future for employees and organizations alike.
Meta lays off "low performers," AI's guardrails in Oz, and DOGE spyware
In this episode, we dive into performance-based layoffs, AI guardrails, and workplace monitoring. First, we chat about Meta's controversial "low performer" layoffs, where terminated employees are challenging the company's performance-based justification. And we talk about the shift in messaging from previous layoffs that were described as cost-cutting measures. We also explore Australia's parliamentary report calling for mandatory AI guardrails in workplace decision-making, and discuss the troubling allegations of employee surveillance in federal workplaces. Join us as we unpack the complex intersection of technology, privacy, and worker rights in today's evolving workplace environment.
Connect with Matt:
On LinkedIn: https://www.linkedin.com/in/matthewmcfarlane/
Or here: https://linktr.ee/fndnseries
Articles:
Meta’s ‘low performer’ layoffs disputed by fired staffers and criticized by experts
Lawmakers call for mandatory AI guardrails to prevent private sector Robodebt
Federal workers say DOGE put spyware on their work PCs. Is that ever okay in the workplace?
Stacey Nordwall (00:00):
Welcome to Toot or Boot, where each week we talk about news related to HR and the world of work. We toot the news we like and boot the news we don't like. I'm your host, Stacey Nordwall, a serial joiner of early stage tech companies as their first in or only HR person. And joining us today we have Matt McFarlane. Hi Matt.
Matt McFarlane (00:20):
Hey, good to see you. Great to be here.
Stacey Nordwall (00:22):
It's great to see you again. Matt and I met a couple of years back at Transform, and it's been great to watch you kind of go off on your own and build this very big presence on LinkedIn talking about compensation so much. Really excited to have you here. I did a little bit of an intro, but can you tell folks who don't know you a little bit about yourself?
Matt McFarlane (00:43):
Absolutely. So as you said, my name is Matt. I'm the director of Foundation, and Foundation is a company that helps startups build compensation practices that are clear, fair, and competitive. So not a path that I think early on in my career many people would've opted for, but compensation is something that really lights my fire. So I really enjoy helping companies feel like they're getting those practices in place.
Stacey Nordwall (01:09):
And for anyone who isn't following you, I'm going to recommend right off the top that they follow you, because you're posting a lot of good stuff about compensation. And I think in HR, it's something you have to know and do, and it's so complex. So it's helpful to have this continuing series that you've been doing about compensation.
Matt McFarlane (01:32):
I appreciate that.
Stacey Nordwall (01:33):
Alright, so I want us to go ahead and dive in. We're going to start by talking about something that has been making the rounds quite a bit lately. Meta has been in the news just a ton, but we're talking specifically about their layoffs. The title of the article from Fortune is Meta's Low Performer Layoffs Disputed by Fired Staffers and Criticized by Experts. The recap is that Meta terminated 3,600 employees saying that these were performance-based cuts. Some of the employees who were dismissed ultimately went on LinkedIn and other platforms to dispute that they were low performers, and they talked about receiving evaluations that said they met or exceeded expectations. The author noted that Microsoft also intends to target low performers in a round of cuts, and they mentioned, and I thought this was really interesting, that this is a notable shift in tone from previous rounds of layoffs that companies have been doing, that these have been messaged more as targeting low performers as opposed to cost-cutting measures. So I want to start off, what did you think about this? Is this a game of comms strategy at this point?
Matt McFarlane (02:48):
I think so. I think for me reading it, I don't think anyone would begrudge a company that has to do layoffs for a valid reason. I think that's the clear distinction for me. It's very much how they've gone about it. I mean, you've got these people here who have been told they're being laid off, which is typically a term used to define or describe an environment where business has maybe dried up or the company's not doing as well. And so we need to have a reduction in force and layoff is typically the term that we're going to use. Then they've gone and said, well, actually it's because these people are a low performer, which is it? Is it the business circumstances? Is it because these people aren't performing? So I think they really haven't been clear, and I think it's quite apparent that, yeah, that's kind of throwing this weird sort of media maelstrom out there to confuse, and I don't know what their game is, but I think the bigger thing for me, is this just a sort of one and done type thing, or is this a longer term ongoing impact that they're going to be continuously having these sorts of reviews and layoffs based on the bottom five or 10% or whatever it is, and what's that doing for the culture at Meta who arguably have this brand of incredible high performers?
(04:07):
Is it going to turn people into becoming really competitive with one another in a bid to sort of avoid that bottom rung that's going to be let go? So yeah, it presents a whole lot of questions for me around what it's doing in that sense.
Stacey Nordwall (04:21):
Yeah, that is a really good point. And yeah, for me, I thought it was really interesting that they said that the messaging, this is a messaging shift. I don't know that I necessarily would've picked up on that otherwise, because in my head I'm immediately thinking about, okay, why are they doing this? Why are they doing it in this way? Are they doing it and framing it as low performance so that they can avoid having to do the notifications that they would normally have to do otherwise? Do they think that because of everything going on with the kind of attack on DEI within the US, that they can do this because people really aren't going to be able to legally take action or they feel like that risk is less. So immediately my head was kind of going in that direction and thinking also about the people who are now maybe out in a job seeking environment that is not very friendly and carrying this label of low performer.
Matt McFarlane (05:25):
Yeah, I actually heard, or I read this conspiracy theory, and I'm all about HR conspiracy theories, but there was this one theory that they've done this to kind of label or tarnish these people so that no one else will hire them. Because I think within the article it was clear that there were a bunch of people that certainly didn't feel that they were poor performers, that had come off performance reviews where they had had really high remarks, exceeds expectations, et cetera. And so I cast my mind back to when tech was in this space where companies were hiring people just so that other companies couldn't have them. And I wonder if it's an extension of that, where they're like, well, we don't want you anymore, but we don't want anyone else to have you, so we're going to kind of label everybody as a poor performer, even though it's got nothing to do with performance. And unfortunately it's their now ex-employees who have to bear the brunt of that in what is an incredibly difficult employment market at the moment. So yeah, I don't know if there's any truth to that. Obviously it's hard to know what's happening behind the curtain at Meta, but I found that a particularly interesting theory.
Stacey Nordwall (06:37):
Yeah. Oh yeah, that is really interesting, because I think, I don't know if they said it within the article or if it was something that I was reading elsewhere, but the idea that you can call people low performers because it gives the appearance that they have been objectively evaluated. But the reality is, and folks who do performance work a lot know this, that it's not objective. It can be quite vague. So to your point, they're putting this vague label on people and sending them out, and who knows what is behind this thought process, why they're doing it this way. It seems like this isn't their first round of layoffs, they've been continually doing it, so are they just trying to message it in a different way? Yeah, there's a lot of unknown here, but it still doesn't feel like they're doing things in a good way.
Matt McFarlane (07:42):
No, I think unfortunately, they seem to be, or appear to be, taking an approach that is very effective for them commercially and for shareholders and things like that. And no doubt their share price. But I think long-term, the impact it's going to have on the business is probably not going to be a good one. But we'll see. Suffice to say there's nothing good about it from my perspective.
Stacey Nordwall (08:04):
Yeah, indeed. All right. I want to move on to this next one. This is one that you brought to my attention. It's from SmartCompany: Lawmakers call for mandatory AI guardrails to prevent private sector Robodebt. And the recap here is that a report from the Australian parliament came to the conclusion that AI systems that help employers with recruitment, payroll, and staff training should be treated as high risk and face mandatory guardrails, and that employers whose AI-powered decisions negatively impact their workforce should be held liable. They found that while AI and these automated decision-making systems could have major productivity benefits, they could also have some real risks for workers subjected to those systems. And yeah, I'm really fascinated, because I love getting the insight into what's happening in other countries, especially on things like this, because I feel like we're nowhere near close to having anything like this within the US. So what did you think about this?
Matt McFarlane (09:12):
Yeah, I really liked this for a bunch of reasons. And so I think an extra little bit of context there is that we had this scandal in Australia a few years ago where the government had a scheme that was dubbed Robodebt, where essentially there was an automated process that was trying to collect on debts that Australians had to the government, but it was delivered in such a way that it actually caused people to feel like they were about to be jailed or about to have all of these terrible things happen to them. Like the scams where you get a scam message from the IRS or something like that, and it sort of puts the fear of God in you that something's going to happen, and it tries to get you to buy a gift card or whatever.
(09:56):
And essentially this was a very similar approach, but it was by the Australian government, or an initiative by them. And so it caused terrible, terrible outcomes, including people who chose to take their life rather than face some of these outcomes. And so I think for me, seeing this article is a really positive thing for a couple of reasons. One is that we've seen the very worst things that can be caused by automation left to run wild. The second is that I see a lot of self-punishment by the people profession about what might be deemed a slow adoption of AI. And a lot of people I speak to are always looking for a very quick fix. They're looking for solutions, they're looking for ways in which AI can be implemented today. And I think this article gives the profession a bit of breathing room and gives people permission, to refer back to Meta, not to take this move fast and break things approach with AI, because it can be so detrimental if you get it wrong.
(10:59):
And so I really like that this work is being done by the government and hopefully it helps pave the way for other countries, but it just sort of puts the onus not only on vendors who are making all of these promises, but actually the companies that are implementing it and saying, if you are going to put something in place that uses AI and it has an adverse impact on your people and on the tax paying citizens of Australia, you will be held liable if it's found to do it in the wrong way. And I think it gives us just a bit of credence to say, Hey, let's make sure that we're really understanding the problem we're trying to solve and that the solution we're putting in place is going to fix it rather than just kind of knee jerking and putting something in because it says AI and we feel like we have to have AI.
Stacey Nordwall (11:41):
Yeah, I love that you said that, because I think that's so spot on from what I've heard from people, especially within tech, that there's this great, great pressure that it has to be included. And HR people are feeling that pressure of, we have to find some way to use AI, particularly in positions where they are so understaffed already and they're drowning and trying to figure out some way to relieve that pressure. And really, we don't know that it is safe in some ways, or we need to recognize that there are some things where this can really go wrong and have adverse impacts, and to be able to have that breathing room. And I think one of my favorite statements in this article, they said it is not enough to rely on technology developers and deployers to mitigate the risks of technologies like AI and ADM.
(12:45):
And they said prioritizing innovation at the expense of people's right to participate in a safe, ethical, and fair digital workplace is overestimating the proposed benefits and underestimating the immense risks. And I feel like people, particularly HR people, to your point, they need to hear this, they need to have that breathing room and to feel like it's okay to not just run forward and to really be more mindful and thoughtful about it because it really can adversely impact people. And I think of all the times that I've seen Sam Altman kind of say like, oh, well, yeah, we will be happy to regulate by whatever legislation is put into place. And it basically is abdicating responsibility that when you tell us to do a thing, we'll do the thing. But until then we're not going to. So I think, yeah, I can see. Go ahead.
Matt McFarlane (13:42):
Oh, no, yeah, I'm nodding vigorously. I was just going to say, I was at South by Southwest in Sydney at the end of last year, and I feel like Meta just keeps coming back into play. But there was somebody from the AI lab or something at Meta, who works on, I think Llama is theirs, but I was struck by their approach to a question from the audience, which was around AI regulation and around AI guardrails. And the response was very much just like, oh, it's a shared responsibility. And I thought, gosh, you guys haven't learned anything from social media, have you? Facebook and Instagram are so detrimental to so many people in so many different ways. And it's like
(14:22):
They take this kind of platform approach where it's like, hey, we're all in this together. And it's like, okay, we're past that now. And I think there needs to be a lot more ownership and responsibility taken on by these vendors, not just Meta, but you're right, OpenAI, et cetera. So yeah, long way to go. I think this is a good article to hopefully start imposing some expectations and constraints not only on the vendors, but again, giving us as people professionals a bit of breathing room to say, hey, we need to make sure we get this right, because it's the company that will pay for it if we don't.
Stacey Nordwall (14:54):
And I'm hopeful, I would say, because I don't believe that that pressure is going to come from the US, and that was stressing me out, honestly. But to see that that pressure might be pushed on these vendors from other countries, that they'll be the ones to put in those regulations and hopefully make vendors safer for others to use, that does fill me with some amount of hope. I definitely tooted this. It felt like a relief to even know that this conversation was being had, and to the point where they're actually talking about enacting these guardrails. Yeah, that made me really happy.
Matt McFarlane (15:42):
Well, Australia does love a rule, so I'm more than excited to see us lead the way on this, like we did with the social media ban on kids under 16 as well. So hopefully it gives some, I don't know, some way for other countries like the US to adopt similar practices. Happy to be the guinea pigs.
Stacey Nordwall (16:02):
Well, yeah. Well, thank you for it. And then this is almost on the flip side of that, in a way. We have an article from Inc.: Federal workers say DOGE put spyware on their work PCs. Is that ever okay in the workplace? The recap is that the authors report there are allegations from federal workers claiming that they've found code installed on their official computers that could be used to spy on them, including on messages that they're sending to team members. The assumption is that DOGE is surveilling people, looking for signs of progressive thinking or disloyalty to Trump. They go on to mention other examples of employee monitoring, such as Slack, and say that federal and most state privacy laws do not require that employers inform employees if they're being monitored. I'm interested, is that different in Australia? Do you see this and think, oh, that wouldn't happen here, or is it kind of still a challenge there as well?
Matt McFarlane (17:06):
I wouldn't say it's a challenge. I know it differs on a state by state basis here in Australia. I mean, I think as a general position, there are circumstances where surveillance is acceptable. Sure, there are certainly environments where, having done the proper notification, et cetera, it ultimately may be your right as a business owner to do that for your people. But I think the real issue that I have with this, and this just screams 1984, George Orwell, Big Brother type things. I mean, loyalty to Trump, things like that. It's crazy to read about in the news. But I think
(17:47):
For me, the thing that is always terribly wrong with surveillance is when it's done covertly, and people aren't aware that they're being watched or followed or listened to or any of those sorts of things. I mean, maybe had they gone about it and said, look, we're going to install these things because we're concerned about an abusive ex, or people aren't doing their jobs, or whatever. Sure, it's a poor way to go about it, but it may be your right to be able to do that. At least you're upfront with people, and they have a chance to be aware of what's going on and make sure that they act in accordance with those expectations, if that's what they choose to do. Or they may say, well, look, that's the straw that breaks the camel's back for me. I'm going to go somewhere else. So yeah, I don't think it's ever a nice thing to surveil your people. I can understand that some people feel like it might be a reasonable thing to do depending on the circumstances, but for me, to do it covertly is always a massive trust breaker. I mean, trust is so hard to get with your people as it is. Why do this when I think it could be done in a more constructive way, if that could even be said.
Stacey Nordwall (18:58):
Yeah, I mean, I think it was definitely a boot for me for the same reasons that you're saying that ultimately it doesn't speak to building trust at all. It doesn't speak to transparency, obviously. I think they mentioned some things of like, okay, well, you can monitor employees to spot who's time wasting or underperforming or to minimize distractions. And for me, I was thinking, how are you objectively doing that? What kind of monitoring are you doing that's actually definitively telling you those things? I feel like that's kind of a cover. It doesn't really seem like it could be accurate to me. And I think clearly they also mentioned, okay, this could be something that could provide evidence for litigation. And certainly if you have discrimination or harassment or things like that, being able to access communications and that kind of evidence is important. But I would also, in that case, presume that at least one person is saying, yes, please access these communications. So there's still that sense of some kind of what you're saying, clarity, communication around what's happening or what it's being used for, what the expectations are. So without any of that, it is very 1984 vibes. It's not
Matt McFarlane (20:26):
Good. No. And it's like, to the article's point about, oh, you can do this with federal workers. And it's like, just because you can doesn't mean you should. And again, you think about the kind of relationship you want to have with your people, and I don't know why you would want to have this one. But then I also think of the nature of this kind of inquisition that's happening at the moment with DOGE, where every week it's a new department and they move at lightning speed. They kind of irresponsibly, by the sounds of it, step in without any proper authorities. They start taking over systems and things like that. And I think where this has a propensity to go wrong, to your point around biases and how can you be objective in a search, is that we've already seen them get it wrong because they are being so fast and loose with the rules. There was the thing about them discovering aid that went to Gaza, thinking it was the Gaza Strip when it actually was a Gaza somewhere in Africa, I think, or continental Africa. So they're not doing their proper due diligence. They're just looking at surface-level things and making decisions and tweeting about it and doing all terrible things to recruit people to their cause. But again, it's fundamentally just undermining government and important institutions, in my mind, that are just going to have these incredibly long-term detrimental impacts to society.
Stacey Nordwall (21:49):
Yeah,
Matt McFarlane (21:51):
Sorry. Come to Australia, come to us. I mean, things aren't much better here in some respects. We've got one opposition member who seems to think that following in Trump's footsteps is great. But anyway. Oh
Stacey Nordwall (22:05):
Gosh. Yeah. I mean, I am taking a breath like, oh God, it's such a bummer. And it's true though, right? When you do things in this way, like what they're talking about, it's hard to know what the impacts will be, because it's not done in a thoughtful way, and the impacts are not going to be good. And I think that's part of the broader thing: when you do anything in the workplace and you're not doing it in a thoughtful way, the results are unlikely to be good.
Matt McFarlane (22:42):
Yeah. Yeah, exactly. And I think the sad fact is that they don't care about the people that are there, and they don't care about building trust with them because they won't be there next week. They're not going to be there beyond a certain period. They just want something that they can market to the US population as being some achievements, whether it's valid or not. And then they want to move on. And I think the whole thing is a big marketing exercise, and unfortunately, the government employees are paying the price for it.
Stacey Nordwall (23:08):
Yeah, indeed. All right. Well, on that note, that is going to be the last article that we're chatting about today, Matt, it was so wonderful to have you on. If someone wants to follow you, connect with you, learn more from you, how can they do that?
Matt McFarlane (23:25):
Yeah, absolutely. It was great being here. Look, I spend a lot of time on LinkedIn, so definitely follow me or reach out to me there. I also have my own newsletter called the Foundation Series, where we explore startup compensation with other experts. Might be heads of people, or founders, or experts in the startup space. So yeah, a bit more of a deep dive on my usual LinkedIn content there, if you're interested.
Stacey Nordwall (23:47):
All right. Thank you so much.
Matt McFarlane (23:48):
Great to be here.