
YouTube Is Rolling Out Five New Features for Premium Subscribers

27 June 2024 at 17:30

The free version of YouTube works perfectly well (current ad blocker debacle notwithstanding), but if you use it a lot, a Premium subscription is actually a pretty good deal. Not only do you get to skip most ads, download videos for offline playback, and gain access to YouTube Music Premium, but you also get to try new features before other YouTube users.

Today, YouTube added five new features for YouTube Premium subscribers—two are rolling out widely now, and three more are experimental features you can enable if you wish. Here's what's new.

Jump Ahead

Following a round of testing, YouTube is going ahead with Jump Ahead. The new feature uses AI to analyze the "best" parts of a video, based on where most users scrub to while watching. When you double-tap the player window to skip ahead 10 seconds, you'll now have the option to "Jump Ahead" to this "best" spot.

YouTube says this feature is rolling out first to YouTube for Android, but it will be available on the iOS app for Premium subscribers soon.

Picture-in-picture for YouTube Shorts

Picture-in-picture (PIP) is a convenient way to watch videos on platforms like YouTube while switching to other apps. Now, Premium subscribers on Android will be able to use PIP for the TikTok-esque YouTube Shorts as well. I suppose this feature will be helpful for Shorts that aren't too short, but I can't really see a strong use case for keeping 15- to 30-second videos in PIP while doing things in other apps.

Smart downloads for Shorts (experimental)

If you opt in to this experimental feature, YouTube will automatically download new Shorts to your smartphone so you can watch them with or without a connection. The company didn't specify, but I'd guess these Shorts would work in PIP mode as well.

Conversational AI (experimental)

YouTube also announced it's bringing back its conversational AI to Android devices. When available, YouTube's AI will appear underneath videos via an "Ask" button. You can ask the bot questions and request content similar to the videos you're watching, even while a video is playing. I doubt this will be a game-changing feature by any means, but it's an interesting experiment nonetheless.

New watch page UI (experimental)

The one change to the YouTube web app for Premium subscribers is an experimental new watch page. YouTube didn't share much about it, but said the new watch page will make it easier to find new videos and engage with comments.

How to opt in to YouTube experimental features

If you want to try out these three new experimental features, as well as any other experimental features YouTube is currently testing, head to YouTube's "New" webpage and opt in.

ChatGPT's Free Mac App Is Actually Pretty Cool

26 June 2024 at 14:30

When OpenAI first rolled out the ChatGPT app for Mac, it was exclusive to ChatGPT Plus subscribers. Unless you paid $20 per month, you needed to stick to the web app or the one on your smartphone. As of Tuesday, however, the Mac app is now free for everyone. And, honestly, you should probably give it a go.

At first glance, OpenAI's Mac app offers the usual ChatGPT experience you're used to. When you log in, you'll find all your previous conversations saved to the sidebar, just as they are in the web and mobile apps. You can type your prompts in the text field, use the mic button to ask questions with your voice, and click the headphones icon to enter Voice mode. (Not the "Her" Voice mode, mind you: That feature has been delayed.) You can also use features like Temporary Chats (conversations that don't pull from your chat history), change your GPT model, generate images with DALL-E, and access GPTs.

A better experience than the web app

But there are some Mac-specific features that make this particular app worth using over the web option. First, in addition to uploading files and photos to ChatGPT, you can take a screenshot of any open window on your Mac directly from the app. Click the paperclip icon, choose Take Screenshot, and then pick an active window from the pop-up list to share with ChatGPT. (The first time you do this, you'll need to grant the ChatGPT app access to screen recording.)

Alternatively, you can take a screenshot of the window manually, then share it to ChatGPT as an image, but the built-in tool skips a step and makes the bot feel a bit more integrated with macOS.

[Image: Using the screenshot tool in ChatGPT for Mac. Credit: Jake Peterson]

But what's even more convenient, in my opinion, is the ChatGPT "launcher." This launcher is essentially Spotlight search, but for ChatGPT. Using a keyboard shortcut, you can bring up a ChatGPT text field directly over any window you're currently using on macOS to start a conversation with the bot. You'll then be taken to the app to continue chatting. This basically saves you the step of switching out of the current app you're in and starting a new thread in ChatGPT; if you see something on your Mac you want to know more about, you can hit Option + Spacebar, type your query, and get started.

[Image: Using the launcher keyboard shortcut. Credit: Jake Peterson]

This launcher also has the same paperclip icon as the app itself, which means you can upload files and take screenshots directly from the shortcut. If you're a ChatGPT power user, this launcher should be a welcome feature. (I don't even use ChatGPT that much, and I really like it.)

Unfortunately, OpenAI is only making the ChatGPT app available on M-series Macs—the machines running Apple silicon. If you have an older Intel-based Mac, you'll still have to head to the web app in order to use ChatGPT on your computer.

If you have a Mac with an M1 chip or newer, you can download the app from OpenAI's download site.

Here's When Google Is Unveiling the Next Pixel

25 June 2024 at 15:30

Another year, another Pixel. It’s no surprise that Google is planning to release the Pixel 9, 9 Pro, and Watch 3 at some point this fall. Every tech company refreshes its smartphones at least once a year. What’s surprising is that the event is happening earlier than ever in 2024.

As reported by The Verge, Google just sent out invites for its Made by Google hardware event. Google says the event will focus on Google AI, Android, and, of course, the “Pixel portfolio of devices.” While this event is usually held in September, Google is inviting people to an August announcement—Aug. 13, to be specific.

The event kicks off at 10 a.m. PT (1 p.m. ET), which is pretty standard for these tech events. But the moved-up date is curious: Why is Google announcing these things a whole month earlier than usual? It’s possible this is Google’s way of getting ahead of rumors and leaks: Pixels tend to be leaked in their entirety by the time Made by Google rolls around, to the point where anyone keeping up with the rumors knows just about everything Google is announcing.

That said, we do have rumors about the Pixel 9, so that strategy might not be working: According to the leaks, Google is planning to pull an Apple and release four different Pixel models: a 9, a 9 Pro, a 9 Pro XL, and a 9 Pro Fold. It's also expected that the Pixels will come with the Tensor G4 chip, Google's latest-generation SoC. These devices will replace the current Pixel 8 and Pixel 8 Pro, just as the Pixel Watch 3 will replace the Watch 2.

In addition to hardware, Google will share announcements about its latest AI features and developments, as well as Android 15, which is currently in beta testing. It will be interesting to see what the company has planned for these announcements, as its latest AI endeavor, AI Overviews, didn't have the best of rollouts.

Because Google has only sent out invites to the event thus far, we don't know for certain how the company plans to stream it for the rest of us. More than likely, though, Google will host a live stream of Made by Google on the company's YouTube page, so if you want to see these announcements live, that's the place to tune in.

Gemini Is Coming to the Side Panel of Your Google Apps (If You Pay)

25 June 2024 at 15:00

If you or your company pay for Workspace, you may have noticed Google's AI integration with apps like Docs, Sheets, and Drive. The company has been pushing Gemini in its products since its big rebrand from "Bard" back in February, and it appears that train isn't stopping anytime soon: Starting this week, you'll now have access to Gemini via a sidebar panel in some of Google's most-used Workspace apps.

Google announced the change in a blog post on Monday, stating that Gemini's new side panel would be available in Docs, Sheets, Slides, Drive, and Gmail—the latter of which the company announced in a separate post. The side panel sits to the right of the window, and can be called up at any time from the blue Gemini button when working in these apps.

Google says the side panel uses Gemini 1.5 Pro, the LLM the company rolled out back in February, equipped with a "longer context window and more advanced reasoning." That longer context window should be helpful when asking Gemini to analyze long documents or run through large sets of data in Drive, as it allows an LLM to handle more information at once in any given request.
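
The side panel itself isn't scriptable, but if you want a feel for what a longer context window actually buys you, here's a minimal sketch against Google's public generative AI Python SDK (the google-generativeai package), which exposes the same Gemini 1.5 Pro model. The file name and environment variable are placeholders, and this is only an illustration of the model's long-context behavior, not how the Workspace integration works under the hood.

```python
import os
import google.generativeai as genai

# Configure the SDK with an API key; the environment variable name is just an example.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Gemini 1.5 Pro is the long-context model Google says powers the side panel.
model = genai.GenerativeModel("gemini-1.5-pro")

# Hand the model an entire long document in a single request; with a long
# context window there's no need to chunk it yourself.
with open("quarterly_report.txt", "r", encoding="utf-8") as f:  # hypothetical file
    document = f.read()

response = model.generate_content(
    "Summarize the key findings in this document, then suggest three improvements:\n\n"
    + document
)
print(response.text)
```

The point is simply that the model can take a whole report in one request and reason over all of it, which is the same property Google is leaning on for Drive and Gmail summaries.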

Now, if you've ever used a generative AI tool—especially one from Google—this experience probably won't shock you: You'll see a pretty typical welcome screen when Gemini comes up, in addition to a series of prompt suggestions for you to ask the bot. When you pull up the side panel in a Google Doc, for example, Gemini may immediately offer you a summary of the doc, then present potential prompts, such as "Refine," "Suggest improvements," or "Rephrase." However, the prompt field at the bottom of the panel is always available for you to ask Gemini whatever you want.

Here are some of the uses Google envisions for Gemini in the side panel:

  • Docs: Help you write, summarize text, generate writing ideas, come up with content from other Google files

  • Slides: Create new slides, create images for slides, summarize existing presentations

  • Sheets: Follow and organize your data, create tables, run formulas, ask for help with tasks in the app

  • Drive: Summarize "one or two documents," ask for the highlights about a project, request a detailed report based on multiple files

  • Gmail: Summarize a thread, suggest replies to an email, get advice on writing an email, ask about emails in your inbox or Drive

[Image: Gemini in Sheets. Credit: Google]

None of these features are necessarily groundbreaking (Gemini has been generally available in Workspace since February), but Google's view is that they're now available in a convenient location as you use these apps. In fact, Google announced that Gmail for Android and iOS is also getting Gemini—just not as a side panel. But while the company is convinced that adding its generative AI to its apps will have a positive impact on the end user, I'm not quite sold. After all, this is the first big AI development from Google since the company's catastrophic "AI Overviews" rollout. I, for one, am curious if Gemini will suggest that I respond to an email by sharing instructions on adding glue to pizza.

As companies like Google continue to add new AI features to their products, we're seeing the weak points in real time: Do you want to trust Gemini's summary of a presentation in Slides, or an important conversation in Gmail, when AI still makes things up and treats them like fact?

Who can try the Gemini side panel in Google apps

That said, not everyone will actually see Gemini in their Workspace apps, even as Google rolls it out. As of now, Gemini's new side panel is only available to companies that purchase the Business and Enterprise Gemini add-on, schools that purchase the Education and Education Premium Gemini add-on, and Google One AI Premium subscribers. If you don't pay for Google's top-tier subscription, and your business or school doesn't pay for Gemini, you won't see Google's AI in Gmail. Depending on who you are, that may be a good or bad thing.

Update Your Pixel Now to Patch This Security Flaw

24 June 2024 at 13:30

Earlier this month, Google issued a security update for its line of Pixel smartphones, patching 45 vulnerabilities in Android. Security updates aren't as flashy as Feature Drops, so users might not feel inspired to update their Pixels right away. This update, however, is one you should install ASAP.

As it turns out, among those 45 patched vulnerabilities is one that's particularly dangerous. The flaw, tracked as CVE-2024-32896, is an escalation-of-privilege vulnerability. These flaws can allow bad actors to gain access to system functions they normally wouldn't have permission for, which opens the door to dangerous attacks. While most of these flaws are caught before bad actors learn how to exploit them, the situation with CVE-2024-32896 isn't so fortunate: In the notes for this security update, Google says, "There are indications that CVE-2024-32896 may be under limited, targeted exploitation."

That makes this vulnerability an example of a "zero-day" issue—a flaw that bad actors know how to take advantage of before a patch is made available to the general public. Every Pixel that doesn't install this patch is left vulnerable to malicious users who know about this issue and want to exploit it.

Google hasn't disclosed any additional information about CVE-2024-32896, so we don't know much about how it works—that said, it sounds like a particularly nasty vulnerability. In fact, Forbes reports that the United States government has taken note of the issue, and has issued a July 4 deadline for any federal employees using a Pixel: Update your phone, or "discontinue use of the product."

GrapheneOS, which develops an open-source, privacy-centric OS for smartphones, says that the patch for CVE-2024-32896 is actually the second half of a larger fix: In April, Google patched CVE-2024-29748, and according to GrapheneOS, both patches target vulnerabilities that forensic companies were exploiting.


How to patch your Pixel

To install this security patch on your Pixel, head to Settings > System > Software update. When the update is available, you can follow the on-screen instructions to install it. Alternatively, you can ask Google Assistant to "Update my phone now."
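
If you'd rather confirm the patch actually landed than take the Settings screen's word for it, here's a minimal sketch that reads the device's reported security patch level over adb. It assumes you have adb installed and USB debugging enabled on the Pixel, and the 2024-06-01 cutoff is simply the month this update shipped.

```python
import subprocess

def pixel_security_patch_level() -> str:
    """Return the device's reported security patch date, e.g. '2024-06-05'."""
    # ro.build.version.security_patch is a standard Android system property.
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.security_patch"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    patch_level = pixel_security_patch_level()
    print(f"Reported security patch level: {patch_level}")
    # The fix for CVE-2024-32896 shipped in the June 2024 Pixel update, so
    # anything at or after 2024-06-01 should include it.
    if patch_level >= "2024-06-01":
        print("You appear to have the June 2024 (or later) patch installed.")
    else:
        print("Update your Pixel: Settings > System > Software update.")
```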

Eight Apps Apple Could Make Obsolete This Year

21 June 2024 at 15:30

Giant tech companies like Apple are constantly adding new features to their platforms, but they can't do everything. To fill the gaps, we have third-party apps: These developers can home in on features Apple products either don't have, or don't implement well, and can focus all their efforts on making those features great. It's really a win-win—that is, until Apple decides to take those great ideas and build them into its own platforms for free.

This practice happens so much, there's a name for it: sherlocking. It refers to Apple's search app, Sherlock, which took features from the third-party search app Watson. With every major iOS and macOS update, Apple introduces features that threaten or effectively replace independent programs. This year, there are eight such apps and categories clearly in the crosshairs. In fact, analysts estimate Apple's changes to iOS 18 alone could impact apps that made nearly $400 million last year. But as we'll discuss, just because Apple is introducing these features, that doesn't automatically make these apps obsolete.

Magnet

Wouldn't you know it: the OS literally named "Windows" has traditionally had better window management than macOS. For years now, it's been easy to snap windows into whatever place you want on a Windows PC: If you want a window on the left half of the screen and another on the right, it's easy with either a mouse drag or a keyboard shortcut. Apple has added some window management options to macOS, both in and out of full-screen mode, but it's still far behind the keyboard-shortcut simplicity Windows offers.

That's where third-party apps like Magnet come into play. These utilities basically add Microsoft-style window management to macOS, letting you snap windows into place with keyboard shortcuts or by dragging them to specific corners of the display. For any PC users moving to Mac for the first time, apps like Magnet were a must.

That is, until WWDC, when Apple casually revealed its new window management system for the Mac. It's a simple system: Drag windows to the sides and corners of your display to snap them into place, or use keyboard shortcuts to do the same. But that simple system takes care of the majority of functions people turn to macOS window management utilities for. It's bad enough for the free programs, but considering apps like Magnet cost $4.99, this could definitely hurt the developer.

1Password

Apple has actually had a decent password management system for a while now: In recent years, iCloud Keychain has done enough for me not to consider third-party alternatives, like 1Password or Dashlane. That said, iCloud Keychain's biggest weakness has been its lack of a central home: It works great in the background, automatically creating and saving new passwords, and autofilling those passwords when you need them. But when it comes to manually pulling up your credentials, having a full-fledged app definitely improves the experience.

Of course, that's what Apple is doing this year: iCloud Keychain is now an app, called Passwords, that syncs across your Apple devices. Now, you have clear separation for things like passwords, 2FA codes, passkeys, and wifi passwords, and you can access shared password collections as well. However, beyond these much-needed changes, it's still a pretty simple experience. I don't think dedicated password managers are in danger because of this new app, and existing users will likely stick with their platform of choice for the additional features it offers. But third-party apps will likely need to convince new users that the Passwords app on their iPhone and Mac isn't good enough for them (especially since it probably is).

TapeACall

Recording phone calls has long sucked on iOS. There was never a built-in way to do it, so you needed to use a half-baked workaround in the free Google Voice app (which only worked for incoming calls) or pay a pricey subscription for an app like TapeACall.

Soon, however, call recording won't just be a part of iOS: You'll basically be invited to try it. Apple advertises the feature as another menu option when you're currently in a call: Just hit the record button, and iOS will record everything you and the other caller say. That likely sent a shiver down the spine of TapeACall, whose $10 per month subscription now seems a bit expensive compared to a free update to iOS 18.

That said, Apple is advertising this feature as part of Apple Intelligence, the brand name for the company's big AI features. If that's true, only the iPhone 15 Pro and 15 Pro Max (as well as future iPhones) will be able to run this phone recording feature. That leaves a sizable market for apps like TapeACall to keep marketing to. (Fingers crossed for a price cut, though.)

Grammarly

Speaking of Apple Intelligence, the company's upcoming AI assistant will be happy to help proofread your writing, and rewrite any sentence or paragraph on the fly—whether you're writing on your iPhone, iPad, or Mac.

That can't be great news for companies like Grammarly, which offer tools across the same set of devices for checking spelling, grammar, and sentence structure as you type. Grammarly has even rolled out AI writing tools in the age of artificial intelligence: At the time, it might have seemed like a competitive move against options like ChatGPT or Gemini. (Why copy and paste your text into a chatbot when a Grammarly extension can do it for you directly in the text field?) But now that Apple also has an AI writing bot on the horizon, the question becomes: Why download the extension?

Of course, just as with the TapeACall situation, there's going to be a limited audience for Apple's AI features at first. Apple Intelligence is only available on the iPhone 15 Pros and M-series Macs, which means any writers on an Intel Mac will still want to keep their proofreader of choice.

Newji

Apple Intelligence is generative AI, which means it has to have an AI art component. Among those new features is the ability to generate new emojis to share in chats. As far as AI art goes, it seems harmless, and even fun, in case the existing emoji options don't quite match the vibe you're going for.

That's kind of a bummer for apps like Newji, though. It works much like Apple's new feature does: You prompt the AI with what you want your emoji to be (Newji's flagship example is "big rat dragging a pizza slice"), and it generates options for you to choose from. Luckily for Newji, Apple Intelligence is rolling out slowly and won't be available on most iPhones—at least for now. So, the company has some time before more people start buying Apple Intelligence-compatible iPhones.

AllTrails

New to the Maps app across the entire Apple ecosystem is a set of hiking features: The update brings downloadable topographical maps to the app, as well as thousands of hikes to save offline. Even when you don't have service, these offline maps and hikes offer turn-by-turn navigation with voice, as if you were pulling from a live directions feed. You can even create your own routes, if you want.

Hmm. Sounds suspiciously similar to AllTrails, doesn't it? Luckily for them, AllTrails has a huge user base already in place, so it can offer more than Apple Maps, at least at the start. But seeing as the iPhone is massively popular in the U.S., the more hikers turn to Apple Maps for hiking, the larger that community could grow. And, unlike some other options on this list, all Apple devices compatible with this year's updates get these features, as they aren't Apple Intelligence-related. This will be one to watch.

Otter.ai

Transcriptions are another non-Apple Intelligence feature coming to Apple devices this year. (Still powered by AI, though.) When you make an audio recording in Voice Memos (or Notes), iOS or macOS will transcribe it for you. It's a big perk: You can quickly review a conversation you recorded, or perhaps a presentation or lecture, and search for a specific topic that was mentioned.

Of course, it's a big perk of services like Otter.ai, too. One might think that Apple's AI transcriptions threaten Otter.ai and its ilk, but I see this one being largely unaffected for now. Otter.ai specifically is so feature-rich and so integrated with various work suites that Apple's new features likely won't touch it. Where Otter stands to lose the most business is with new transcribers, who just want a quick way to review a voice memo. Why bother looking for a solution when the transcription now appears directly with your recording on your iPhone or Mac?

Bezel

Of all the apps on this list, Bezel might be the most in trouble. With macOS 15, Apple is adding iPhone screen mirroring. That means you can wirelessly view and control your iPhone's display from your Mac, all while your iPhone remains locked and put away.

Bezel is undoubtedly the most popular third-party option for mirroring your iPhone's display to your Mac, but it might not be able to compete against macOS Sequoia. For one, Bezel requires a cable, while macOS supports wireless iPhone mirroring. But the larger issue is that Bezel costs $29 for use on one Mac, and $69 for up to three Macs. Meanwhile, Apple's screen mirroring feature is free with an update to macOS 15 on any supported Mac. It's definitely a tough situation for Bezel.

But again, just because Apple adds a new feature to iOS and macOS, that doesn't mean third-party options that offer the same feature are toast. The App Store is filled with apps that sell themselves on features Apple has had baked into its platforms for years, and they succeed by offering a different (or perhaps improved) experience from Apple. I think most of these apps have that same opportunity, but really, it'll come down to what the users want.

Use This Workaround to Send High Quality Photos and Videos on WhatsApp

20 June 2024 at 19:00

WhatsApp might be the most popular chat app in the world, but it hasn’t always been the best for sending photos and videos. The app traditionally had a 16MB limit on any media you sent, and, even still, compressed it to save space. That compression resulted in lower quality images and videos, which is frustrating in a time when smartphones have incredible cameras.

It's getting better, though. Mark Zuckerberg announced last year that WhatsApp supports high-quality photo sharing—although you might have missed the option if you weren’t looking for it. The update didn’t include support for HD videos, however, until the company quietly updated the app a week later.

HD quality is becoming the default

Fast forward to June 2024, and it seems WhatsApp is finally ready to commit to high-quality media: As reported by Android Police, Meta is now rolling out the ability to send high-quality photos and videos by default. That means that, once the update hits your app, your photos and videos should share in HD without you having to do anything. (Previously, you needed to tap the "HD quality" option on every send, which was tedious for anyone who always wanted their media in high quality.)

You can check if you have this setting enabled from Settings > Storage and data > Media upload quality. Make sure "HD quality" is selected. WhatsApp will warn you that HD-quality media may take longer to send and could be up to six times larger, which means it may eat into your data plan more quickly. With this setting enabled, you should notice the HD option highlighted before you send your photo or video.

HD quality isn't uncompressed

However, “HD” media isn’t exactly what you might think it is. Videos max out at 720p, even if your original video was recorded in 1080p or 4K, which means WhatsApp is still compressing the video quite a lot. Still, it’s better than standard quality, which drops the resolution to around 480p. Likewise, WhatsApp still applies some compression to photos sent via the HD Quality setting, so even still, you won’t be able to send HD photos in their native resolution with this method.

Use this loophole to send full resolution photos and videos on WhatsApp

WhatsApp actually has a better solution for sending high-res content: Rather than send your videos as videos, send them as documents. This has been the best way to send full-res media for a while, as WhatsApp previously had a 100MB limit on documents, and just about anything can be a “document.” Recently, that limit jumped to 2GB per file, which makes it possible to send most (if not all) of your photos and videos in their full resolution to whoever you want in WhatsApp.

To send a video file via this method, open a WhatsApp conversation, tap the attachment icon (Android) or the (+) (iOS), choose “Document,” then choose the files you want to share. WhatsApp will send the files without compression, so you can share your content in its full quality (as long as it’s under 2GB). To preserve the quality of anything larger than 2GB, you’ll need to use another sharing method, like Dropbox or Google Drive.
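
If you're not sure whether a video will squeak under that cap, a quick size check saves a failed upload. Here's a minimal sketch; the file name is hypothetical, and since WhatsApp doesn't say whether "2GB" means 2 x 10^9 bytes or 2 GiB, it uses the more conservative figure.

```python
from pathlib import Path

# WhatsApp's document limit, per the article; 2 * 10^9 bytes is the
# conservative reading of "2GB".
WHATSAPP_DOC_LIMIT_BYTES = 2 * 10**9

def fits_whatsapp_document_limit(path: str) -> bool:
    """Return True if the file can be sent as a WhatsApp document uncompressed."""
    size = Path(path).stat().st_size
    print(f"{path}: {size / 1024**2:.1f} MiB")
    return size <= WHATSAPP_DOC_LIMIT_BYTES

if __name__ == "__main__":
    if fits_whatsapp_document_limit("vacation_video.mp4"):  # hypothetical file name
        print("Under the cap: send it as a document to keep full quality.")
    else:
        print("Over 2GB: share a Dropbox or Google Drive link instead.")
```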

Update Your Windows PC to Avoid This Wifi Security Flaw

20 June 2024 at 17:00

Microsoft's latest Patch Tuesday update has a series of fixes for bugs in both Windows 10 and Windows 11. One of these vulnerabilities is particularly troubling, though, as it allows bad actors to hack your PC so long as they're within wifi range.

As reported by The Register, Microsoft patched 49 security flaws with its latest Patch Tuesday update, but three are of key interest.

The first, which Microsoft says is public (but not exploited), is tracked as CVE-2023-50868 and can allow a bad actor to push your CPU to the point where it stops functioning correctly.

The second, CVE-2024-30080, concerns Microsoft Message Queuing: This flaw allows a remote attacker to send a malicious data packet to a Windows system and execute arbitrary code on that system. It doesn't affect individual users as much, but Microsoft did give it a high severity rating, and while it hasn't been exploited yet, the company thinks exploitation is more than likely.

But the last flaw seems most pressing: CVE-2024-30078 is a vulnerability affecting wifi drivers. Microsoft says a bad actor can send a malicious data packet to a machine using a wifi networking adapter, which would allow them to execute arbitrary code. In practice, this could let someone within wifi range of another user hack their computer over that wifi connection alone. And since the flaw affects many different versions of Windows, attackers will likely try to exploit it as soon as possible.

It's a chilling concept: If someone learns how to exploit this flaw, they could use it to attack other Windows PCs in their immediate vicinity. Imagine the field day a hacker could have going to a high-density area of laptop users like a coffee shop or shared workspace. Fortunately, the latest security updates for both Windows 10 and Windows 11 patch these issues, so once you're updated, you're safe to return to your office in the corner of the café.

How to install the latest patches on your Windows PC

If you're running Windows 11, head to Start > Settings > Windows Update. On Windows 10, head to Start > Settings > Update & Security > Windows Update. Either way, hit Check for updates. Once the update is available, download and install it on your PC.
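
If you want to double-check which updates are already installed (rather than trigger new ones), here's a minimal sketch that shells out to PowerShell's built-in Get-HotFix cmdlet and lists the most recent entries. It's only a sanity check to confirm the June 2024 fixes have landed; installing the patch still happens through Settings as described above.

```python
import subprocess

# Get-HotFix is a standard PowerShell cmdlet; this lists the five most
# recently installed updates with their KB IDs and install dates.
PS_COMMAND = (
    "Get-HotFix | Sort-Object InstalledOn -Descending | "
    "Select-Object -First 5 HotFixID, Description, InstalledOn | "
    "Format-Table -AutoSize"
)

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", PS_COMMAND],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```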

Apple’s Explanation for Why You Need an iPhone 15 Pro to Use Apple Intelligence Seems Sus

20 June 2024 at 15:00

"AI for the rest of us." That's how Apple advertises Apple Intelligence on its website, the company's upcoming generative AI experience. The problem is, that tagline only applies if you have the right device: namely, a newer Mac, or a brand-new iPhone.

Apple Intelligence is chock-full of features we haven't seen on iOS, iPadOS, and macOS before. Following in the footsteps of ChatGPT and Gemini, Apple Intelligence is capable of image generation, text workshopping, proofreading, and intelligent summaries, as well as enhancing Siri in ways that make the digital assistant, you know, actually assist you.

In order to run these features, Apple is only making Apple Intelligence available on select iPhones, iPads, and Macs. For the latter two categories, it's a rather wide net: Only M-series iPads and Macs can run Apple Intelligence. Sure, that leaves out plenty of the Intel Macs still in use today, as well as the iPads running Apple's A-series chips, but the company has been selling M-series devices since 2020. Many Mac users have moved to Apple silicon, which means they'll see these AI features when they update to macOS Sequoia in the fall—or, at least, the features Apple has managed to roll out by then.

However, things aren't so generous on the iOS side. Only those of us with an iPhone 15 Pro or 15 Pro Max can run Apple Intelligence when it's available with a future version of iOS 18. That's because Apple requires the A17 Pro chip for running Apple Intelligence on iOS, which the company has only put into these particular iPhones so far. Even the iPhone 15 and 15 Plus, which launched at the same time as the Pros, can't run Apple Intelligence, because they're using the previous year's A16 Bionic chip.

Why Apple Intelligence is only available on newer Apple devices

Apple's stance is that Apple Intelligence is so demanding that it needs to run on the most powerful hardware the company currently has available. A large part of that is the processing power the desktop-class M-series chips have, as well as the minimum 8GB of unified RAM. (The iPhone 15 Pro also comes with 8GB of RAM.) But the main component as far as Apple Intelligence is concerned is likely the Neural Engine: While Apple has included a Neural Engine in all iPhone chips since the A11 Bionic in the iPhone X, 8, and 8 Plus, Apple only started adding a Neural Engine to the Mac with the M1.

That stance is largely reflected in an interview between John Gruber of Daring Fireball and Apple's marketing chief Greg Joswiak. Joswiak had this to say to the question of why older Apple devices couldn't run Apple Intelligence:

So these models, when you run them at run times, it's called inference, and the inference of large language models is incredibly computationally expensive. And so it's a combination of bandwidth in the device, it's the size of the Apple Neural Engine, it's the oomph in the device to actually do these models fast enough to be useful. You could, in theory, run these models on a very old device, but it would be so slow that it would not be useful.

Essentially, Apple feels that a compromised Apple Intelligence experience isn't one worth having at all, and only wants the feature running on hardware that can "handle it." So, no Apple Intelligence for Intel Macs, nor an iPhone other than the 15 Pro.

Apple Intelligence should probably be able to run on more devices

While there is sense to that argument, it's definitely easy to take the cynical view here and assume Apple is trying to push customers into buying a new iPhone or Mac. I don't really think that's the case, but I don't buy the idea that Apple Intelligence can only run on these devices. Keeping Apple Intelligence to the M-series Macs makes the most sense to me: These are the Macs with Apple's Neural Engine, so it's easiest to get these AI features up and running.

It's the iPhone and iPad side of things that rubs me the wrong way. These devices have Neural Engines built into their SoCs. Sure, they might not be as powerful as the Neural Engine in the iPhone 15 Pro (Apple says the A17 Pro's Neural Engine is up to twice as fast as the A16's), but I have trouble believing an Apple Neural Engine from 2022 isn't fast enough to handle features a chip made in 2023 can. I also wouldn't be surprised if Apple could get Apple Intelligence working well on a higher-end Intel Mac, but at least those machines don't have Neural Engines at all, so that cutoff has some logic to it.

Not to mention, not all the processing is going to be happening on-device anyway. When iOS or macOS thinks a process is too intensive for the A17 Pro or M-series chip to handle itself, it outsources that processing to the cloud—albeit, in Apple fashion, as privately as possible. Even if the A16 Bionic can't handle as many local AI processes as the A17 Pro, how much would the experience be downgraded by outsourcing more of those processes to the cloud?

Who wants Apple Intelligence anyway?

But here's the thing: Even if Apple is choosing to omit Apple Intelligence from the iPhone 15 and earlier unnecessarily, I don't think it's to sell more iPhone 15 Pros. I think Apple simply doesn't want to waste resources optimizing a feature that doesn't have a ton of demand. Despite ChatGPT's popularity and notoriety, I don't see "more AI" as something most iPhone and Mac customers are looking for in their devices. I think most customers buy a new iPhone or new Mac for the essential features, like keeping up with friends (especially over iMessage), taking solid photos, and using their favorite apps. AI features baked into the OS could be a plus, but it's tough to say when there's really no precedent yet for consumers purchasing hardware made for AI.

Personally, if I had an Intel Mac or an iPhone 14 Pro that was working fine, I wouldn't see this as a reason to upgrade, even if Siri sounds more useful now. I think Apple knows that, and doesn't want to waste time developing these features for older devices. It probably doesn't have the resources for it anyway—the company is staggering the release of key AI features, like Siri's upgrades, so it has time to make sure everything works as it should before committing to the next set of AI options.

While Apple Intelligence might be the feature set grabbing most of the headlines, most people are going to update their iPhones and find other useful changes instead—some indeed powered by AI. You'll have the option to totally customize your Home Screen now, with control over where app icons go and even what they look like. You'll be able to send messages over satellite without a cellular connection, and you'll find new text effects and options in Messages. You'll even be able to mirror your iPhone's display to your Mac, if that's something you want to do.

The point is, there are a lot of new features coming to iPhones and Macs compatible with iOS 18 and macOS 15—even if Apple Intelligence isn't among them. I get Apple's reasoning here, and while I bet the company could run Apple Intelligence on older devices, I don't think you're going to be missing out on much. We'll have to see once Apple Intelligence does actually arrive—one piece at a time.
