Cloud Computing solutions, including Software, Infrastructure, Platform, Unified Communications, Mobile, and Content as a Service are well-established and growing. The evolution of these markets will be driven by the complex interaction of all participants, beginning with end customers.
Edge Strategies has conducted over 80,000 interviews on behalf of our clients, in both mature and emerging markets, with decision-makers across the full cloud ecosystem, including Vendor, Service Provider, and End Customer organizations.
Typical projects include:
We provide current, actionable insight into business decision processes across market segments, from SMBs to Large Enterprises. Our work leverages a deep understanding of the business models of key Cloud Ecosystem participants including:
Our experience allows us to get up to speed quickly on new projects. We are experts in designing and conducting quantitative and qualitative research. Based on our focused findings, we work with our clients to make the decisions necessary to gain early success in a variety of markets, including SaaS, IaaS, PaaS, UCaaS, and mobile/device services.
The European Union (EU) AI Act may seem like a done deal, but stakeholders are still drafting the code of practice that will lay out rules for general-purpose AI (GPAI) models, including those with systemic risk. Now, though, as that drafting process approaches its deadline, US President Donald Trump is reportedly pressuring European regulators to scrap the rulebook.

The US administration and other critics claim that it stifles innovation, is burdensome, and extends the bounds of the AI law, essentially creating new, unnecessary rules. The US government’s Mission to the EU recently reached out to the European Commission and several European governments to oppose its adoption in its current form, Bloomberg reports.

“Big tech, and now government officials, argue that the draft AI rulebook layers on extra obligations, including third party model testing and full training data disclosure, that go beyond what is in the legally binding AI Act’s text, and furthermore, would be very challenging to implement at scale,” explained Thomas Randall, director of AI market research at Info-Tech Research Group.

Onus is shifting from vendor to enterprise

On its web page describing the initiative, the European Commission said, “the code should represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices.” The code is voluntary, but the goal is to help providers prepare to satisfy the EU AI Act’s regulations around transparency, copyright, and risk mitigation.

It is being drafted by a diverse group of general-purpose AI model providers, industry organizations, copyright holders, civil society representatives, members of academia, and independent experts, overseen by the European AI Office. The deadline for its completion is the end of April. The final version is set to be presented to EU representatives for approval in May, and will go into effect in August, one year after the AI Act came into force.
It will have teeth; Randall pointed out that non-compliance could draw fines of up to 7% of global revenue, or heavier scrutiny by regulators, once it takes effect. But whether or not Brussels, the de facto capital of the EU, relaxes or enforces the current draft, the weight of ‘responsible AI’ is already shifting from vendors to the customer organizations deploying the technology, he noted.

“Any organization conducting business in Europe needs to have its own AI risk playbooks, including privacy impact checks, provenance logs, or red-team testing, to avoid contractual, regulatory, and reputational damages,” Randall advised.

He added that if Brussels did water down its AI code, it wouldn’t just be handing companies a free pass, “it would be handing over the steering wheel.” Clear, well-defined rules can at least mark where the guardrails sit, he noted. Strip those out, and every firm, from a garage startup to a global enterprise, will have to chart its own course on privacy, copyright, and model safety. While some will race ahead, others will likely have to tap the brakes because the liability would “sit squarely on their desks.”

“Either way, CIOs need to treat responsible AI controls as core infrastructure, not a side project,” said Randall.

A lighter touch regulatory landscape

If other countries were to follow the current US administration’s approach to AI legislation, the result would likely be a lighter touch regulatory landscape with reduced federal oversight, noted Bill Wong, AI research fellow at Info-Tech Research Group. He pointed out that in January, the US administration issued Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” Right after that, the National Institute of Standards and Technology (NIST) updated its guidance for scientists working with the US Artificial Intelligence Safety Institute (AISI).
Further, references to “AI safety,” “responsible AI,” and “AI fairness” were removed; instead, a new emphasis was placed on “reducing ideological bias to enable human flourishing and economic competitiveness.” Wong said: “In effect, the updated guidance appears to encourage partners to align with the executive order’s deregulatory stance.”
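Randall’s advice about enterprise AI risk playbooks, with privacy impact checks and provenance logs, can be made concrete. The sketch below is purely illustrative: the `ProvenanceRecord` fields and `log_model_use` helper are assumptions of this example, not anything mandated by the AI Act or the draft code of practice, but they show the kind of per-invocation audit trail a customer organization might keep.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative provenance record for each model invocation. Field names are
# hypothetical, not taken from the EU AI Act or the draft code of practice.
@dataclass
class ProvenanceRecord:
    model: str          # which model/version produced the output
    purpose: str        # business purpose of the call
    input_summary: str  # redacted or summarized input (privacy impact check)
    reviewed_by: str    # human accountable for reviewing the output
    timestamp: str      # UTC time of the invocation

def log_model_use(model: str, purpose: str, input_summary: str,
                  reviewed_by: str) -> str:
    """Serialize one provenance record as a JSON line for an audit log."""
    record = ProvenanceRecord(
        model=model,
        purpose=purpose,
        input_summary=input_summary,
        reviewed_by=reviewed_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # append this line to an audit log file

entry = log_model_use("gpt-4o", "contract summarization",
                      "12-page supplier agreement (names redacted)", "j.doe")
```

The point is less the code than the discipline: every use of a model leaves a record that can answer a regulator’s (or a lawyer’s) questions later.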
We’re all talking to fake people now, but most people don’t realize that interacting with AI is a subtle and powerful skill that can and should be learned. The first step in developing this skill set is to acknowledge to yourself what kind of AI you’re talking to and why you’re talking to it.

AI voice interfaces are powerful because our brains are hardwired for human speech. Even babies’ brains are tuned to voices before they can talk, picking up language patterns early on. This built-in conversational skill helped our ancestors survive and connect, making language one of our most essential and deeply rooted abilities.

But that doesn’t mean we can’t think more clearly about how to talk when we speak to AI. After all, we already speak differently to other people in different situations. For example, we talk one way to our colleagues at work and a different way to our spouses. Yet people still talk to AI like it’s a person, which it’s not; like it can understand, which it cannot; and like it has feelings, pride, or the ability to take offense, which it doesn’t.

The two main categories of talking AI

It’s helpful to break the world of talking AI (both spoken and written) into two categories:

1. Fantasy role playing, which we use for entertainment.
2. Tools, which we use for some productive end, either to learn information or to get a service to do something useful for us.

Let’s start with role-playing AI.

AI for pretending

You may have heard of a site and app called Status AI, which is often described as a social network where everyone else on the network is an AI agent. A better way to think about it is that it’s a fantasy role-playing game in which the user can pretend to be a popular online influencer. Status AI is a virtual world that simulates social media platforms. Launched as a digital playground, it lets people create online personas and join fan communities built around shared interests.
It “feels” like a social network, but every interaction—likes, replies, even heated debates—comes from artificial intelligence programmed to act like real users, celebrities, or fictional characters. It’s a place to experiment, see how it feels to be someone else, and interact with digital versions of celebrities in ways that aren’t possible on real social media. The feedback is instant, the engagement is constant, and the experience, though fake, is basically a game rather than a social network.

Another basket of role-playing AI comes from Meta, which has launched AI-powered accounts on Facebook, Instagram, and WhatsApp that let users interact with digital personas — some based on real celebrities like Tom Brady and Paris Hilton, others entirely fictional. These AI accounts are clearly labeled as such, but (thanks to AI) can chat, post, and respond like real people. Meta also offers tools for influencers to use AI agents to reply to fans and manage posts, mimicking their style. These features are live in the US, with plans to expand, and are part of Meta’s push to automate and personalize social media. Because these tools aim to provide make-believe engagements, it’s reasonable for users to pretend like they’re interacting with real people.

These Meta tools attempt to cash in on the wider and older phenomenon of virtual online influencers. These are digital characters created by companies or artists, but they have social media accounts and appear to post just like any influencer. The best-known example is Lil Miquela, launched in 2016 by the Los Angeles startup Brud, which has amassed 2.5 million Instagram followers. Another is Shudu, created in 2017 by British photographer Cameron-James Wilson, presented as the world’s first digital supermodel. These characters often partner with big brands. A post by one of the major virtual influencer accounts can get hundreds or thousands of likes and comments.
The content of these comments ranges from admiration for their style and beauty to debates about their digital nature. Presumably, many people think they’re commenting to real people, but most probably engage with a role-playing mindset. By 2023, there were hundreds of these virtual influencers worldwide, including Imma from Japan and Noonoouri from Germany. They’re especially popular in fashion and beauty, but some, like FN Meka, have even released music. The trend is growing fast, with the global virtual influencer market estimated at over $4 billion by 2024.

AI for knowledge and productivity

We’re all familiar with LLM-based chatbots like ChatGPT, Gemini, Claude, Copilot, Meta AI, Mistral, and Perplexity. The public may be even more familiar with non-LLM assistants like Siri, Google Assistant, Alexa, Bixby, and Cortana, which have been around much longer.

I’ve noticed that most people make two general mistakes when interacting with these chatbots or assistants. The first is that they interact with them as if they’re people (or role-playing bots). And the second is that they don’t use special tactics to get better answers.

People often treat AI chatbots like humans, adding “please,” “thank you,” and even apologies. But the AI doesn’t care, remember, and is not significantly affected by these niceties. Some people even say “hi” or “how are you?” before asking their real questions. They also sometimes ask for permission, like “Can you tell me…” or “Would you mind…” which adds no value. Some even sign off with “goodbye” or “thanks for your help,” but the AI doesn’t notice or care.

Politeness to AI wastes time — and money!

A year ago, Wharton professor Ethan Mollick pointed out that people using “please” and “thank you” in AI prompts add extra tokens, which increases the compute power needed by the LLM chatbot companies.
This concept resurfaced on April 16 of this year, when OpenAI CEO Sam Altman replied to another user on X, saying (perhaps exaggerating) that polite words in prompts have cost OpenAI “tens of millions of dollars.”

“But wait a second, Mike,” you say. “I heard that saying ‘please’ to AI chatbots gets you better results.” And that’s true — sort of. Several studies and user experiments have found that AI chatbots can give more helpful, detailed answers when users phrase requests politely or add “please” and “thank you.” This happens because the AI models, trained on vast amounts of human conversation, tend to interpret polite language as a cue for more thoughtful responses.

But prompt engineering experts say that clear, specific prompts — such as giving context or stating exactly what you want — consistently produce much better results than politeness. In other words, politeness is a tactic for people who aren’t very good at prompting AI chatbots.

The best way to get top-quality answers from AI chatbots is to be specific and direct in your request. Always say exactly what you want, using clear details and context. Another powerful tactic is something called “role prompting” — tell the chatbot to act as a world-class expert, such as, “You are a leading cybersecurity analyst,” before asking a question about cybersecurity. This method, proven in studies like Sander Schulhoff’s 2025 review of over 1,500 prompt engineering papers, leads to more accurate and relevant answers because it tells the chatbot to favor content in the training data produced by experts, rather than just lumping the expert opinion in with the uneducated viewpoints. Also: Give background if it matters, like the audience or purpose. (And don’t forget to fact-check responses. AI chatbots often lie and hallucinate.)

It’s time to up your AI chatbot game. Unless you’re into using AI for fantasy role playing, stop being polite. Instead, use prompt engineering best practices for better results.
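Those tactics, be specific, supply context and audience, and use role prompting, can be sketched as a small prompt builder. This is an illustrative sketch only: the `build_prompt` helper is hypothetical, and the list-of-messages format merely follows the system/user convention common to many chat APIs; no real API is called.

```python
# Hypothetical helper illustrating role prompting plus specific, contextual
# requests. The {"role": ..., "content": ...} message format follows the
# convention many chat APIs use; nothing here calls a real service.

def build_prompt(role, task, context=None, audience=None):
    """Compose a chat-style message list: an expert persona as the system
    message, and a direct, detailed request as the user message."""
    system = f"You are {role}."  # role prompting: steer toward expert content
    user_parts = [task]          # be specific and direct about what you want
    if context:
        user_parts.append(f"Context: {context}")
    if audience:
        user_parts.append(f"Audience: {audience}")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": " ".join(user_parts)},
    ]

messages = build_prompt(
    role="a leading cybersecurity analyst",
    task="List the three most common initial access vectors in ransomware attacks.",
    context="Mid-sized company, mostly Windows endpoints.",
    audience="Non-technical executives.",
)
```

Note what is absent: no “please,” no “thank you,” no small talk. Every part of the prompt carries information the model can actually use.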
Apple is on track to source all the iPhones it sells in the US from India by the end of next year as politically driven tensions drive a wedge between the US and China. This is a major move that the company has been building toward for some time, but the recent tariff announcements may have accelerated the plan.

Designed by Apple in California, Made in India

If you’ve been following my work, you’ll already be aware of the importance India now has for Apple. India is expected to be the manufacturing hub for 25% of all the iPhones sold globally by the end of the year, and now the Financial Times reports that Apple aims to make 60 million iPhones in India “as soon as next year” — though other sources suggest that target may prove too ambitious.

To support this transition, Apple and its partners have invested billions in building their businesses there. Foxconn is currently building its second-largest factory outside China in India. At a cost of $2.5 billion, the facility will create 40,000 jobs and double Foxconn’s manufacturing capacity in India. India’s biggest conglomerate, the Tata Group, also makes iPhones for Apple using facilities formerly owned by Pegatron and Wistron.

To support the project, Apple is also encouraging component manufacturers to set up shop in India, with India’s government recently announcing a range of incentives to encourage them to do so. The idea, of course, is to ensure that the iPhones assembled in India use components that are also manufactured there, in order to minimize the cost of any tariffs. These investments are accompanied by a range of external improvements, including improved infrastructure. While there have been no major signals to this effect as yet, it is becoming increasingly likely that Apple will eventually begin manufacturing other products in India as well.

This didn’t — and couldn’t — happen overnight

What’s important to note is that none of this is happening suddenly.
This has already taken years. Apple has been working on its journey to India for almost a decade, presumably since before Apple CEO Tim Cook made his first disclosed visit to the nation. During that visit, Cook stressed that his company intended to build a lasting position in India, saying: “We’re not here for a quarter, or two quarters, or the next year, or the next year. We’re here for a thousand years.”

Apple had originally intended simply to set up retail stores there and build a business from India’s burgeoning middle class, but very swiftly saw the sense of transitioning some production to the nation, accelerating these plans once the Covid pandemic threw international supply chains into chaos. In other words, while the company’s move to make iPhones for the US market in India may seem sudden, it is something that has taken years.

That effort proves that shifting manufacturing to new nations is not an overnight task; it takes time, a lot of investment, an available and accessible skilled workforce, and more. With that in mind, it is foolish to expect manufacturing infrastructure to migrate across territorial boundaries any faster than Apple — with all its advantages — has been able to achieve in India. The overall impact of Apple’s more diversified approach is a decreased reliance on its former manufacturing partner, China; other nations, including Brazil, Vietnam, and Thailand, are also seeing some benefit as Apple increases its manufacturing capabilities in those countries.

Doing the business

The company’s moves into India aren’t just about tariff avoidance. The decision to base more manufacturing there has helped Apple capture hearts and minds among consumers there, translating into accelerating business results. Driven by the iPhone 16 series, Apple sold three million iPhones in India in Q1 2025, the first time it has reached that volume, according to IDC, and its largest-ever shipment in the nation.
“Apple achieved its best-ever Q1 in India, driven by strong iPhone 16 series momentum,” said Canalys. In other words, it looks as if Apple’s shrewd decision to create business in India will pay a double benefit, giving the company access to a growing and developing economy even as the so-called ‘First World’ economies tumble into existential decline.
As AI integration accelerates, businesses are facing widening skills gaps that traditional employment models struggle to address, and so more companies are choosing to hire freelancers to fill the void, according to a new report.

The report, from freelance work platform Upwork, claims there’s a major workforce shift as well, with 28% of skilled knowledge workers now freelancing for greater “autonomy and purpose.” An additional 36% of full-time employees are considering switching to freelance, while only 10% of freelancers want to return to traditional jobs, according to the report, which is based on a recent survey of 3,000 skilled, US-based knowledge workers. Gen Z is leading the shift, as 53% of skilled Gen Z professionals already freelance, and they’re expected to make up 30% of the freelance US workforce by 2030.

“The traditional 9-to-5 model is rapidly losing its grip as skilled talent chooses flexibility, financial control, and meaningful work over outdated corporate structures,” Kelly Monahan, managing director of the Upwork Research Institute, said in a statement. “Companies that cling to old hiring and workforce models risk falling behind.”

US businesses have ramped up their freelance hiring by 260% between 2022 and 2024, according to a report from Mellow.io, an HR platform that manages and pays freelance contractors. Increasingly, US businesses have turned to freelancers overseas — most frequently Eastern Europe — to fill their tech talent void, particularly for web developers, programmers, data analysts, and web designers, according to Mellow.io. “This trend shows no signs of slowing,” Mellow.io’s report stated. “The region offers an unparalleled balance of cost efficiency and highly skilled talent.”

The US is a freelancer haven

Gig workers, earning through short-term, flexible jobs via apps or platforms, are also thriving, according to career site JobLeads.
JobLeads analyzed data from the Online Labour Observatory and the World Bank Group to reveal the countries dominating online gig work. The United States leads in the number of online freelancers, with 28% of the global online freelance market. Software and tech roles dominate in the US, representing 36.4% of freelancers, followed by creative/multimedia (21.1%) and clerical/data entry jobs (18.2%). Globally, Spain and Mexico rank second and third in freelancer share, with 7.0% and 4.6%, respectively. Among full-time online gig workers, 52% have a high school diploma, while 20% hold a bachelor’s degree, according to JobLeads.

“The gig economy is booming worldwide, with the number of gig workers expected to rise by over 30 million in the next year alone,” said Martin Schmidt, JobLeads’ managing director. “This rapid growth reflects a fundamental shift in how people approach work — flexibility and autonomy are no longer just perks but non-negotiables for today’s workforce.”

Gen Z and younger professionals are embracing gig work for its flexibility and control, while businesses gain access to a global pool of skilled freelancers, Schmidt said. “As the sector continues to evolve, both workers and employers need to adapt to a new reality where traditional employment models may no longer meet the needs and expectations of the modern workforce,” he said.

Confidence in freelancing is high, with 84% of freelancers and 77% of full-time workers viewing its future as bright, according to Upwork’s report. Freelancers are also seeing more opportunities, with 82% reporting more work than last year, compared to 63% of full-time employees, the report said. Freelance workers generated $1.5 trillion in earnings in 2024 alone, Upwork said. The trend is gaining momentum, particularly among Gen Z, with many full-time employees eyeing independent work, according to Upwork.
Freelancers are leading in AI, software, and sustainability jobs, demonstrating higher adaptability and continuous learning, according to the report, which focused exclusively on skilled knowledge workers, not gig workers. It also included “moonlighters,” or workers who have full-time employment but freelance on the side.

More than half (54%) of freelancers report advanced AI proficiency, compared to 38% of full-time employees, and 29% have extensive experience building, training, and fine-tuning machine learning models (vs. 18% of full-time employees), the report stated. Those who earn exclusively through freelance work report a median income of $85,000, surpassing their full-time employee counterparts at $80,000, the report stated.

Upwork’s Future Workforce Index is the company’s first such report, so it said it is unable to provide freelance employment numbers from previous years that would indicate a rising or falling trend. “However, what we can confidently say, based on multiple studies conducted by the Upwork Research Institute over the past several years, is that freelancing isn’t a passing trend,” an Upwork spokesperson said. “It continues to hold steady and accelerate, emerging as a vital and intentional component of the skilled workforce.”

A silver tsunami

Emily Rose McRae, a Gartner Research senior director analyst, said she’s seeing growing interest in freelancing from professionals who desire more flexible work, oftentimes as a safety net for people who lost their jobs during economic turmoil “and also as a way to build up a network of clients when starting a new business or looking to expand your small business.” Organizations are also facing an impending “silver tsunami” of older workers retiring and leaving a talent gap in their wake.
“Many clients I speak with on this topic are trying to identify the best strategy for addressing this expertise gap, whether it is upskilling more junior employees, bringing retired experts back to serve as freelance mentors or coaches, contracting out critical projects to experts on a freelance market, or even redesigning roles and workflows to reduce the amount of expertise needed,” she said. “Being able to bring past employees back as freelancers can be critical for knowledge management and training,” McRae said. “This is especially critical when increasingly AI tools are being deployed on the basic and repetitive tasks that were previously the training ground used for employees to create a pipeline of future experts within the employee base,” McRae said. Despite the occurrence of layoffs — and sometimes because of them — organizations often face skills gaps exacerbated by the rise of AI, according to Forrester Research. Skills intelligence tools, often powered by AI, can help organizations identify and manage the skills and gaps in their workforce and predict future skill needs, including recommending needed recruiting, upskilling and reskilling, and talent mobility. Companies must also be able to scale up or down rapidly in on-demand talent markets, which include contractors, freelancers, gig workers, and service providers. On-demand talent increases the adaptability of your workforce but works best for non-core functions and for specialized skills that are needed for a limited period. Companies, however, can’t simply replace employees with freelancers without facing significant risks, McRae noted. Freelancers are best used for defined projects with clear deliverables. Using them to do the same work as former employees, without changing the role or workflow, can lead to legal and operational issues, she said. 
As reliance on non-employees grows, so do risks like worker misclassification, dual employment, compliance problems, and costly mistakes such as rehiring underperforming contractors or overpaying for services. “I’ll see this at organizations that instituted hiring freezes, so business leaders turned to contractors to continue to be able to meet their goals,” she said. “It can also create financial risks — when there isn’t much transparency or data collection going on, organizations may find that they are paying the same contractor service provider or freelancer different rates in different departments, for the same set of tasks.” There’s also a risk that third-party contractors are not vetting temp workers, who may not meet the necessary certifications and trainings to comply with local or national regulations, McRae added. “Or that contractors and freelancers have not been fully offboarded after completing their assignments and still retain access to the organization’s systems and data,” she said.