Cloud Computing solutions, including Software, Infrastructure, Platform, Unified Communications, Mobile, and Content as a Service, are well-established and growing. The evolution of these markets will be driven by the complex interaction of all participants, beginning with end customers.
Edge Strategies has conducted over 80,000 interviews on behalf of our clients in both mature and emerging markets with decision-makers across the full cloud ecosystem, including Vendor, Service Provider, and End Customer organizations.
Typical projects include:
We provide current, actionable insight into business decision processes across market segments, from SMBs to Large Enterprises. Our work leverages a deep understanding of the business models of key Cloud Ecosystem participants including:
Our experience allows us to get up to speed quickly on new projects. We are experts in designing and conducting quantitative and qualitative research. Based on our focused findings, we work with our clients to make the decisions necessary to gain early success in a variety of markets, including SaaS, IaaS, PaaS, UCaaS, and mobile/device services.
OpenAI is reportedly considering developing its own browser with the aim of challenging Google's dominance in the market, according to The Information. The new browser would have built-in support for ChatGPT and OpenAI's search engine, SearchGPT. OpenAI representatives have apparently held talks with developers from Condé Nast, Redfin, Eventbrite, and Priceline, but so far no agreements have been signed. Shares of Google's parent company Alphabet declined on the Nasdaq exchange after the browser plans became public, Reuters reported.
Meta Quest 3 and Quest 3S headset owners will soon gain access to the "full capabilities" of Windows 11 in mixed reality, Microsoft announced at its Ignite conference this week. Users will be able to access a local Windows PC or Windows 365 Cloud PC "in seconds," Microsoft said in a blog post, providing access to a "private, high-quality, multiple-monitor workstation." Although it's already possible to cast a PC desktop to a Quest device, the update should make the process simpler.

Microsoft has been working with Meta to bring its apps to the mixed-reality headsets for a while. Last year, the company launched several Microsoft 365 apps on Quest devices, with web versions of Word, Excel, and PowerPoint, as well as Mesh 3D environments in Microsoft Teams. At its Build conference in May, Microsoft also announced Windows "volumetric apps" in a developer preview that promise to bring 3D content from Windows apps into mixed reality.

Meta is the market leader, with Quest headsets accounting for 74% of global AR and VR headset shipments, according to data from Counterpoint Research. At the same time, Microsoft has rolled back its own virtual and mixed reality plans, recently announcing it will discontinue its HoloLens 2 headset, with no sign of a new version in the works. The number of devices sold globally fell 28% year on year in the second quarter of 2024, according to IDC analysts. However, IDC predicts total device sales will grow from 6.7 million units in 2024 to 22.9 million in 2028 as cheaper devices come to market.

Using a Quest headset as a private, large or multi-monitor setup makes sense from a productivity perspective, said Avi Greengart, founder of research firm Techsponential. Access to all of Windows — rather than just a browser and select Windows 365 apps — adds "a lot of utility."

"Large virtual monitors are a key use case for investing in head-mounted displays, whether that's a mainstream headset like the Quest 3, a high-end spatial computing platform like the Apple Vision Pro, or a pair of display glasses from XREAL that plug into your phone or laptop," said Greengart.

Several hardware constraints limit the use of Quest devices for work tasks, including display resolution and field of view (the amount of the observable virtual world visible with the device), along with the discomfort of wearing a headset for extended periods. Meta's Quest 3 and 3S devices are more comfortable than Apple's Vision Pro, but lack the higher resolution of the more expensive device. Greengart added that some people — particularly older users — might struggle to focus on small text at a headset's fixed focal distance. Those who require vision-correction lenses inside the headset can find the edges of the display distorted, he said.

"I love working in VR, but compared to a physical multi-monitor setup, it isn't quite as productive and it gives me a headache," said Greengart. "That said, I've been covering this space for years, and each iteration gets better."
Once upon a time, we'd say software is eating the planet. It still is, but these days our world is being consumed by generative AI (genAI), which is seemingly being added to everything. Now, Apple's Siri is on the cusp of bringing in its own form of genAI in a more conversational version Apple insiders are already calling "LLM Siri."

What is LLM Siri?

Apple has already told us to expect a more contextually aware version of Siri in 2025, part of the company's soon-to-be-growing "Apple Intelligence" suite. This Siri will be able to, for example, respond to questions and requests concerning a website, contact, or anything else you happen to be looking at on your Mac, iPhone, or iPad. Think of it as an intensely focused AI that works to understand what you are seeing and tries to give you relevant answers and actions related to it.

That's what we knew already. What we learn now (from Bloomberg) is that Apple's AI teams are working to give Siri even more capabilities. The idea is to ensure Apple's not-so-smart smart assistant can better compete against chatbots like ChatGPT, thanks to the addition of large language models (LLMs) like those used by OpenAI's ChatGPT and Google's Gemini.

What will Smart Siri do?

This smarter Siri will be able to hold conversations and drill into inquiries, just like those competing engines — particularly ChatGPT's Advanced Voice Mode. Siri's responses will also become more human, enabling it to say, "I have a stimulating relationship with Dr. Poole," and for you to believe it. These conversations won't only need to be the equivalent of a visit to the therapist on a rainy Wednesday; you'll also be able to get into fact-based and research-focused conversations, with Siri dragging up answers and theories on command. In theory, you'll be able to access all the knowledge of the internet and a great deal of computationally driven problem solving from your now-much-smarter smartphone.

Apple's ambition is to replace, at least partially, some of the features Apple Intelligence currently hands off to ChatGPT, though I suspect the iPhone maker will be highly selective in the tasks it does take on. The company has already put some of the tools in place to handle this kind of on-the-fly task assignment; Apple Intelligence can already check a request to see whether it can be handled on the device, on Apple's own highly secure servers, or needs to be handed over for processing by OpenAI or any other partners that might be in the mix.

When will LLM Siri leap into action?

Bloomberg speculates that this smarter assistant tech could be one of the highlights Apple offers a glimpse of at WWDC 2025. If that's correct, it seems reasonable to anticipate the tech will eventually be introduced across the Apple ecosystem, just like Apple Intelligence. You could be waiting a while for that introduction; the report suggests a spring 2026 launch for the service, which the company is already testing as a separate app across its devices.

In the run-up to these announcements, Siri continues to gain features. As of iOS 18.3, it will begin to build a personal profile of users in order to provide better responses to queries. It will also be able to use App Intents, which let third-party developers make the features of their apps available across the system via Siri. ChatGPT integration will make its own debut next month.

Will it be enough?

Siri as a chatbot is one area in which Apple does appear to have fallen behind competitors.
While it seems a positive — at least in competitive terms — that Apple is working to remedy that weakness, its current competitors will not be standing still (though unfolding AI regulation might put a ceiling on some of their global domination dreams). Apple's teams will also be aware of the work taking place in the background between former Apple designer Jony Ive and Sam Altman's OpenAI, and will want to ensure it has a moat in place to protect itself against whatever the fruits of that labor turn out to be. With that in mind, Apple's current approach — to identify key areas in which it can make a difference and to work toward edge-based, private, secure AI — makes sense and is likely to remain the primary thrust of Apple's future efforts.

If there's one net positive every Apple user already enjoys from the intense race to AI singularity, it is that the pre-installed memory inside all Apple devices has now increased. That means even those who never, ever want to have a conversation with a machine can get more done, faster, than before.
The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.

The biggest news in agentic AI came this month when we learned that OpenAI's agent, Operator, is expected to launch in January. Operator will function as a personal assistant that can take multi-step actions on its own. We can expect it to be put to work writing code, booking travel, and managing daily schedules. It will do all this by using the applications already installed on your PC and by using cloud services.

It joins Anthropic, which recently unveiled a feature for its AI models called "Computer Use." This allows Claude 3.5 Sonnet to perform complex tasks on computers autonomously. The AI can now move the mouse, click on specific areas, and type commands to complete intricate tasks without constant human intervention.

We don't know exactly how these tools will work, or even whether they'll work. Both are in what you might call "beta" — aimed mainly at developers and early adopters. But what they represent is the coming age of agentic AI.

What are AI agents?

A great way to understand agents is to compare them with something we've all used before: AI chatbots like ChatGPT. Existing, popular LLM-based chatbots are designed around the assumption that the user wants, expects, and will receive text output — words and numbers. No matter what the user types into the prompt, the tool is ready to respond with letters from the alphabet and numbers from the numeric system. The chatbot tries to make the output useful, of course. But no matter what, it's designed for text in, text out.

Agentic AI is different. An agent doesn't dive straight into the training data to find words to string together. Instead, it stops to understand the user's objective and comes up with the component parts needed to achieve that goal for the user. It plans. And then it executes that plan, usually by reaching out to and using other software and cloud services.

AI agents have three abilities that ordinary AI chatbots don't:

1. Reasoning: At the core of an AI agent is an LLM responsible for planning and reasoning. The LLM breaks down complex problems, creates plans to solve them, and gives reasons for each step of the process.

2. Acting: AI agents have the ability to interact with external programs. These software tools can include web searches, database queries, calculators, code execution, or other AI models. The LLM determines when and how to use these tools to solve problems.

3. Memory access: Agents can access a "memory" of what has happened before, which includes both the internal logs of the agent's thought process and the history of conversations with users. This allows for more personalized and context-aware interactions.

Here's a step-by-step look at how AI agents work:

1. The user types or speaks something to the agent.
2. The LLM creates a plan to satisfy the user's request.
3. The agent tries to execute the plan, potentially using external tools.
4. The LLM looks at the result and decides whether the user's objective has been met.
5. If not, it starts over and tries again, repeating this process until the LLM is satisfied.
6. Once satisfied, the LLM delivers the results to the user.

Why AI agents are so different from any other software

"Reasoning" and "acting" (often implemented using the ReAct framework, short for Reasoning and Acting) are the key differences between AI chatbots and AI agents. But what's really different is the "acting" part.
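That plan-act-check loop is concrete enough to sketch in code. What follows is a minimal, hypothetical ReAct-style loop in Python: `call_llm`, the JSON action format, and the toy `calculator` and `web_search` tools are all stand-ins invented for illustration, not any vendor's actual agent API.

```python
import json

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a real model call. A real agent would send `messages`
    to an LLM endpoint; this stub just lets the example run end to end."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if tool_results:  # once an observation exists, wrap up and answer
        return json.dumps({"action": "finish", "answer": tool_results[-1]["content"]})
    return json.dumps({"action": "calculator", "input": "6 * 7"})

# Tools the agent is allowed to "act" with (toy stand-ins).
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
    "web_search": lambda query: f"(stub) top result for: {query}",
}

def run_agent(user_request: str, max_steps: int = 5) -> str:
    # The conversation history doubles as the agent's short-term memory.
    messages = [
        {"role": "system", "content": "Plan, then pick a tool or finish. Reply in JSON."},
        {"role": "user", "content": user_request},
    ]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))   # the LLM reasons and plans
        if decision["action"] == "finish":          # objective met: deliver the result
            return decision["answer"]
        tool = TOOLS[decision["action"]]            # otherwise, act via an external tool
        observation = tool(decision["input"])
        messages.append({"role": "tool", "content": observation})  # remember the outcome
    return "Stopped after max_steps without finishing."

print(run_agent("What is six times seven?"))  # prints: 42
```

In a real agent, `call_llm` would be a network call to a model, the tool set would be far richer, and the memory would persist across sessions; the control flow, though, is essentially this loop.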
If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. It can even choose to use other AI models or chatbots.

Do you see the paradigm shift? Since the dawn of computing, the users of software were human beings. With agents, for the first time ever, the software is also a user that uses software.

Many of the software tools agents use are regular websites and applications designed for people. They'll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic's "Computer Use" feature. Other tools that agents can access are designed exclusively for agent use.

Because agents can access software tools, they're more useful, modular, and adaptable. Instead of training an LLM from scratch or cobbling together some automation process, you can simply provide the tools the agent needs and let the LLM figure out how to achieve the task at hand. Agents are also designed to handle complex problem-solving and to work more autonomously.

The oversized impact of the coming age of agents

When futurists and technology prognosticators talk about the likely impact of AI over the next decade, they're mostly talking about agents. AI agents will take over many of the tasks in businesses that are currently automated and, more impactfully, enable the automation of all kinds of things now done by employees looking to offload mundane, repetitive, and complicated tasks to agents.

Agents will also give rise to new jobs, roles, and specialties related to managing, training, and monitoring agentic systems. They will add another specialty to the cybersecurity field, which will need agents to defend against cyber attackers who are also using agents.

As I've been saying for many years, I believe augmented-reality AI glasses will grow so big they'll replace the smartphone for most people. Agentic AI will make that possible. In fact, AI smart glasses and AI agents were made for each other. Using streaming video from the glasses' camera as part of the multimodal input (other inputs being sound, spoken interaction, and more), AI agents will constantly work for the user through simple spoken requests.

One trivial and perfectly predictable example: You see a sign advertising a concert, look directly at it (enabling the camera in your glasses to capture that information), and tell your agent you'd like to attend. The agent will book the tickets, add the event to your calendar, invite your spouse, hire a babysitter, and arrange a self-driving car to pick you up and drop you off.

Like so many technologies, AI will both improve and degrade human capability. Some users will lean on agentic AI like a crutch, never learning new skills or knowledge and outsourcing self-improvement to their agent assistants. Others will rely on agents to push their professional and personal educations into overdrive, learning about everything they encounter all the time.

The key takeaway here is that while agentic AI sounds like futuristic sci-fi, it's happening in a big way starting next year.
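To ground the idea of software "designed exclusively for agent use" and the concert scenario above, here is one more minimal, hypothetical sketch: a tool registry in which each tool publishes a description the planning LLM reads and a function the agent calls. The tool names (`book_tickets`, `add_calendar_event`) and the schema are invented for illustration only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str                   # identifier the LLM uses in its plan
    description: str            # read by the LLM when it decides what to do
    run: Callable[[str], str]   # called by the agent when it acts

# Hypothetical registry; the tools and their behavior are stubs for illustration.
REGISTRY = [
    Tool("book_tickets", "Buy tickets for a named event.",
         lambda event: f"(stub) booked 2 tickets for {event}"),
    Tool("add_calendar_event", "Add an event to the user's calendar.",
         lambda event: f"(stub) calendar entry created for {event}"),
]

def tool_manifest() -> str:
    """Text handed to the planning LLM so it knows which actions are available."""
    return "\n".join(f"- {t.name}: {t.description}" for t in REGISTRY)

def dispatch(name: str, argument: str) -> str:
    """Run the tool the LLM's plan names and return the observation."""
    tool = next(t for t in REGISTRY if t.name == name)
    return tool.run(argument)

print(tool_manifest())
print(dispatch("book_tickets", "Saturday's concert"))
```

The design point is that the description field is written for the model rather than for a person: it is the agent-facing equivalent of a user interface.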