

Here's When Google Is Unveiling the Next Pixel

25 June 2024 at 15:30

Another year, another Pixel. It's no surprise that Google plans to release the Pixel 9, 9 Pro, and Watch 3 at some point this fall; every tech company refreshes its smartphones at least once a year. What is surprising is that, in 2024, the event is happening earlier than ever.

As reported by The Verge, Google just sent out invites for its Made by Google hardware event. Google says the event will focus on Google AI, Android, and, of course, the “Pixel portfolio of devices.” While this event is usually held in September, Google is inviting people to an August announcement—Aug. 13, to be specific.

The event kicks off at 10 a.m. PT (1 p.m. ET), which is pretty standard for these tech events. But the earlier date is curious: Why is Google announcing these things a whole month ahead of schedule? It's possible it's Google's way of getting around rumors and leaks: Pixels tend to leak in their entirety by the time Made by Google rolls around, to the point where anyone keeping up with the rumors knows just about everything Google is going to announce.

That said, we do have rumors about the Pixel 9, so that strategy might not be working: According to the leaks, Google is planning to pull an Apple and release four different Pixel models: a 9, a 9 Pro, a 9 Pro XL, and a 9 Pro Fold. The Pixels are also expected to ship with the Tensor G4 chip, Google's latest-generation SoC. These devices will replace the current Pixel 8 and Pixel 8 Pro, while the Pixel Watch 3 will replace the Watch 2.

In addition to hardware, Google will share announcements about its latest AI features and developments, as well as Android 15, which is currently in beta testing. It will be interesting to see what the company has planned for these announcements, as its latest AI endeavor, AI Overviews, didn't have the best of rollouts.

Because Google has only sent out invites to the event thus far, we don't know for certain how the company plans to stream the event for the rest of us. However, more than likely, Google will host a live stream of Made by Google on the company's YouTube page. If you want to see these announcements live, tune into YouTube.

Gemini Is Coming to the Side Panel of Your Google Apps (If You Pay)

25 June 2024 at 15:00

If you or your company pay for Workspace, you may have noticed Google's AI integration with apps like Docs, Sheets, and Drive. The company has been pushing Gemini in its products since its big rebrand from "Bard" back in February, and it appears that train isn't stopping anytime soon: Starting this week, you'll have access to Gemini via a sidebar panel in some of Google's most-used Workspace apps.

Google announced the change in a blog post on Monday, stating that Gemini's new side panel would be available in Docs, Sheets, Slides, Drive, and Gmail—the latter of which the company announced in a separate post. The side panel sits to the right of the window, and can be called up at any time from the blue Gemini button when working in these apps.

Google says the side panel uses Gemini 1.5 Pro, the LLM the company rolled out back in February, equipped with a "longer context window and more advanced reasoning." That longer context window should be helpful when asking Gemini to analyze long documents or run through large sets of data in Drive, as it allows an LLM to handle more information at once in any given request.

Now, if you've ever used a generative AI experience—especially one from Google—this experience probably won't shock you: You'll see a pretty typical welcome screen when Gemini comes up, in addition to a series of prompt suggestions for you to ask the bot. When you pull up the side panel in a Google Doc, for example, Gemini may immediately offer you a summary of the doc, then present potential prompts, such as "Refine," "Suggest improvements," or "Rephrase." However, the prompt field at the bottom of the panel is always available for you to ask Gemini whatever you want.

Here are some of the uses Google envisions for Gemini in the side panel:

  • Docs: Help you write, summarize text, generate writing ideas, come up with content from other Google files

  • Slides: Create new slides, create images for slides, summarize existing presentations

  • Sheets: Follow and organize your data, create tables, run formulas, ask for help with tasks in the app

  • Drive: Summarize "one or two documents," ask for the highlights about a project, request a detailed report based on multiple files

  • Gmail: Summarize a thread, suggest replies to an email, get advice on writing an email, ask about emails in your inbox or Drive

Gemini in Sheets. Credit: Google

None of these features are necessarily groundbreaking (Gemini has been generally available in Workspace since February) but Google's view is they're now available in a convenient location as you use these apps. In fact, Google announced that Gmail for Android and iOS are also getting Gemini—just not as a side panel. But while the company is convinced that adding its generative AI to its apps will have a positive impact on the end user, I'm not quite sold. After all, this is the first big AI development from Google since the company's catastrophic "AI Overviews" rollout. I, for one, am curious if Gemini will suggest that I respond to an email by sharing instructions on adding glue to pizza.

As companies like Google continue to add new AI features to their products, we're seeing the weak points in real time: Do you want to trust Gemini's summary of a presentation in Slides, or an important conversation in Gmail, when AI still makes things up and treats them like fact?

Who can try the Gemini side panel in Google apps

That said, not everyone will actually see Gemini in their Workspace apps, even as Google rolls it out. As of now, Gemini's new side panel is only available to companies that purchase the Business or Enterprise Gemini add-on, schools that purchase the Education or Education Premium Gemini add-on, and Google One AI Premium subscribers. If you don't pay for Google's top-tier subscription, and your business or school doesn't pay for Gemini, you won't see Google's AI in Gmail. Depending on who you are, that may be a good or a bad thing.


Update Your Pixel Now to Patch This Security Flaw

24 June 2024 at 13:30

Earlier this month, Google issued a security update for its line of Pixel smartphones, patching 45 vulnerabilities in Android. Security updates aren't as flashy as Feature Drops, so users might not feel as inspired to install them right away. This update, however, is one you should install ASAP.

As it turns out, among those 45 patched vulnerabilities is one that's particularly dangerous. The flaw, tracked as CVE-2024-32896, is an elevation of privilege vulnerability. These flaws can allow bad actors to gain access to system functions they normally wouldn't have permission for, which opens the door to dangerous attacks. While most of these flaws are caught before bad actors learn how to exploit them, the situation with CVE-2024-32896 isn't so fortunate: In the notes for this security update, Google says, "There are indications that CVE-2024-32896 may be under limited, targeted exploitation."

That makes this vulnerability an example of a "zero-day" issue—a flaw that bad actors know how to take advantage of before a patch is made available to the general public. Every Pixel that doesn't install this patch is left vulnerable to malicious users who know about this issue and want to exploit it.

Google hasn't disclosed any additional information about CVE-2024-32896, so we don't know much about how it works—that said, it sounds like a particularly nasty vulnerability. In fact, Forbes reports that the United States government has taken note of the issue, and has issued a July 4 deadline for any federal employees using a Pixel: Update your phone, or "discontinue use of the product."

GrapheneOS, the team behind an open source, privacy-centric OS for smartphones, says that the patch for CVE-2024-32896 is actually the second half of a larger fix: In April, Google patched CVE-2024-29748, and according to GrapheneOS, both patches target vulnerabilities that forensic companies were exploiting.


How to patch your Pixel

To install this security patch on your Pixel, head to Settings > System > Software update. When the update is available, you can follow the on-screen instructions to install it. Alternatively, you can ask Google Assistant to "Update my phone now."

Eight Apps Apple Could Make Obsolete This Year

21 June 2024 at 15:30

Giant tech companies like Apple are constantly adding new features to their platforms, but they can't do everything. To fill the gaps, we have third-party apps: These developers can home in on features Apple products either don't have, or don't implement well, and can focus all their efforts on making those features great. It's really a win-win—that is, until Apple decides to take those great ideas and implement them into its platforms for free.

This practice happens so often, there's a name for it: sherlocking. The term comes from Sherlock, Apple's search app, which absorbed features from the third-party search app Watson. With every major iOS and macOS update, Apple introduces features that threaten or effectively replace independent programs. This year, there are eight such apps and categories clearly in the crosshairs. In fact, analysts estimate Apple's changes in iOS 18 alone could impact apps that made nearly $400 million last year. But as we'll discuss, just because Apple is introducing these features doesn't automatically make these apps obsolete.

Magnet

Wouldn't you know it, but the OS known as "Windows" has traditionally had better window management than macOS. For years now, it's been easy to snap Windows' windows into whatever place you want: If you want a window on the left half of the screen, and another on the right, it's easy with either a mouse drag or a keyboard shortcut. Apple has added some window management options to macOS, both in and out of full-screen mode, but it's still far behind the keyboard-shortcut simplicity Windows offers.

That's where third-party apps like Magnet come into play. These utilities basically add Microsoft's window management to macOS: windows can snap into place with keyboard shortcuts, or by dragging them to specific corners of the display. For any PC users moving to a Mac for the first time, apps like Magnet were a must.

That is, until WWDC, when Apple casually revealed its new window management system for the Mac. It's a simple system: Drag windows to the sides and corners of your display to snap them into place, or use keyboard shortcuts to do the same. But that simple system takes care of the majority of functions people turn to macOS window management utilities for. It's bad enough for the free programs, but considering apps like Magnet cost $4.99, this could definitely hurt the developer.

1Password

Apple has actually had a decent password management system for a while now: In recent years, iCloud Keychain has done enough for me to not consider third-party alternatives, like 1Password or Dashlane. That said, iCloud Keychain's biggest weakness was its lack of centrality: It works great in the background, automatically creating and saving new passwords, and autofilling those passwords when you need them. But when it comes to manually pulling up your credentials, having a full-fledged app definitely improves the experience.

Of course, that's what Apple is doing this year: iCloud Keychain is now an app, called Passwords, that syncs across your Apple devices. Now, you have clear separation for things like passwords, 2FA codes, passkeys, and wifi passwords, and you can access shared password collections as well. However, beyond these much-needed changes, it's still a pretty simple experience. I don't think dedicated password managers are in danger because of this new experience, and existing users will likely stick with their platform of choice for the additional features it offers. But third-party apps will likely need to convince new users why the built-in Passwords app on their iPhone and Mac isn't good enough for them (especially when, for most people, it likely is).

TapeACall

Recording phone calls has always sucked on iOS: There was never a built-in way to do it, so you needed to rely on a half-baked workaround in the free Google Voice app (which only worked for incoming calls) or pay a pricey subscription for an app like TapeACall.

Soon, however, call recording won't just be part of iOS: You'll basically be invited to try it. Apple advertises the feature as another menu option while you're in a call: Just hit the record button, and iOS will record everything you and the other caller say. That likely sent a shiver down the spine of TapeACall's developers, whose $10-per-month subscription now seems a bit expensive compared to a free update to iOS 18.

That said, Apple is advertising this feature as part of Apple Intelligence, the brand name for the company's big AI features. If that's true, only the iPhone 15 Pro and 15 Pro Max (as well as future iPhones) will be able to run this phone recording feature. That leaves a sizable market for apps like TapeACall to keep marketing to. (Fingers crossed for a price cut, though.)

Grammarly

Speaking of Apple Intelligence, the company's upcoming AI assistant will be happy to help proofread your writing, and rewrite any sentence or paragraph on the fly—whether you're writing on your iPhone, iPad, or Mac.

That can't be great news for companies like Grammarly, which offer solutions across the same set of devices for checking spelling, grammar, and sentence structure as you type. Grammarly has even rolled out AI writing tools in the age of artificial intelligence: At the time, it might have seemed like a competitive move against options like ChatGPT or Gemini. (Why copy and paste your text into a chatbot when a Grammarly extension can do it for you directly in the text field?) But now that Apple also has an AI writing bot on the horizon, the question becomes: Why download the extension?

Of course, just as with TapeACall, there's going to be a limited audience for Apple's AI features at first. Apple Intelligence is only available on the iPhone 15 Pros and M-series Macs, which means any writers on an Intel Mac will still want to keep their proofreader of choice.

Newji

Apple Intelligence is generative AI, which means it has to have an AI art component. Among those new features is the ability to generate new emojis to share in chats. As far as AI art goes, it seems harmless, and even fun, in case the existing emoji options don't quite match the vibe you're going for.

That's kind of a bummer for apps like Newji, though. It basically works exactly like Apple's new feature does: You prompt the AI with what you want your emoji to be (Newji's flagship example is "big rat dragging a pizza slice"), and it generates options for you to choose from. Luckily for Newji, Apple Intelligence is slow-going, and won't be available on most iPhones—at least for now. So, the company has some time before more people start buying Apple Intelligence-compatible iPhones.

AllTrails

New to the Maps app across the entire Apple ecosystem is a set of hiking features: The update brings downloadable topographical maps to the app, as well as thousands of hikes to save offline. Even when you don't have service, these offline maps and hikes offer turn-by-turn voice navigation, as if you were pulling from a live directions feed. You can even create your own routes, if you want.

Hmm. Sounds suspiciously similar to AllTrails, doesn't it? Luckily for them, AllTrails has a huge user base already in place, so it can offer more experiences than Apple Maps, at least at the start. But seeing as the iPhone is massively popular in the U.S., the more hikers turn to Apple Maps for hiking, the larger that community could grow. And, unlike some other options on this list, all Apple devices compatible with this year's updates get these features, as they aren't tied to Apple Intelligence. This will be one to watch.

Otter.ai

Transcription is another non-Apple Intelligence feature coming to Apple devices this year (still powered by AI, though). When you make an audio recording in Voice Memos (or Notes), iOS or macOS will transcribe it for you. It's a big perk: You can quickly review a conversation you recorded, or perhaps a presentation or lecture, and search for a specific topic that was mentioned.

Of course, it's a big perk of services like Otter.ai, too. One might think Apple's AI transcriptions threaten Otter.ai and its ilk, but I see this one being largely unaffected for now: Otter.ai is so feature-filled, and so integrated with various work suites, that it will likely be insulated from Apple's new features here. I see Otter losing the most business from new transcribers, who just want a quick way to review a voice memo. Why bother looking for a solution when the transcription now appears directly with your recording on your iPhone or Mac?

Bezel

Of all the apps on this list, Bezel might be the most in trouble. With macOS 15, Apple is adding iPhone screen mirroring. That means you can wirelessly view and control your iPhone's display from your Mac, all while your iPhone remains locked and put away.

Bezel is undoubtedly the most popular third-party option for mirroring your iPhone's display to your Mac, but it might not be able to compete against macOS Sequoia. For one, Bezel requires a cable, while macOS supports wireless iPhone mirroring. But the larger issue is that Bezel costs $29 for use on one Mac, and $69 for up to three Macs. Meanwhile, Apple's screen mirroring feature is free with an update to macOS 15 on any supported Mac. It's definitely a tough situation for Bezel.

But again, just because Apple adds a new feature to iOS and macOS, that doesn't mean third-party options that offer the same feature are toast. The App Store is filled with apps that sell themselves on features Apple has had baked into its platforms for years, and they succeed by offering a different (or perhaps improved) experience from Apple. I think most of these apps have that same opportunity, but really, it'll come down to what the users want.

Use This Workaround to Send High Quality Photos and Videos on WhatsApp

20 June 2024 at 19:00

WhatsApp might be the most popular chat app in the world, but it hasn’t always been the best for sending photos and videos. The app traditionally had a 16MB limit on any media you sent, and, even still, compressed it to save space. That compression resulted in lower quality images and videos, which is frustrating in a time when smartphones have incredible cameras.

It's getting better, though. Mark Zuckerberg announced last year that WhatsApp supports high-quality photo sharing—although you might have missed the option if you weren't looking for it. The update didn't include support for HD videos, however; the company quietly added that a week later.

HD quality is becoming the default

Fast forward to June 2024, and it seems WhatsApp is finally ready to commit to high-quality media: As reported by Android Police, Meta is now rolling out the ability to send high-quality photos and videos by default. That means that, once the update hits your app, your photos and videos should share in HD without you having to do anything. (Previously, you needed to hit the "HD quality" option every time, which was frustrating for anyone who always wanted to send their media in high quality.)

You can check if you have this setting enabled from Settings > Storage and data > Media upload quality. Make sure "HD quality" is selected. WhatsApp will warn you that HD quality media may take longer to send, and that it could be up to six times larger, which means it may eat into your data plan faster. With this setting enabled, you should notice the HD option highlighted before you send your photo or video.

HD quality isn't uncompressed

However, “HD” media isn’t exactly what you might think it is. Videos max out at 720p, even if your original video was recorded in 1080p or 4K, which means WhatsApp is still compressing the video quite a lot. Still, it’s better than standard quality, which drops the resolution to around 480p. Likewise, WhatsApp still applies some compression to photos sent via the HD Quality setting, so even still, you won’t be able to send HD photos in their native resolution with this method.

Use this loophole to send full resolution photos and videos on WhatsApp

WhatsApp actually has a better solution for sending high-res content: Rather than send your videos as videos, send them as documents. This has been the best way to send full-res media for a while, as WhatsApp previously had a 100MB limit on documents, and just about anything can be a “document.” Recently, that limit jumped to 2GB per file, which makes it possible to send most (if not all) of your photos and videos in their full resolution to whoever you want in WhatsApp.

To send a video file via this method, open a WhatsApp conversation, tap the attachment icon (Android) or the (+) (iOS), choose “Document,” then choose the files you want to share. WhatsApp will send the files without compression, so you can share your content in its full quality (as long as it’s under 2GB). To preserve the quality of anything larger than 2GB, you’ll need to use another sharing method, like Dropbox or Google Drive.

Update Your Windows PC to Avoid This Wifi Security Flaw

20 June 2024 at 17:00

Microsoft's latest Patch Tuesday update has a series of fixes for bugs in both Windows 10 and Windows 11. One of these vulnerabilities is particularly troubling, though, as it allows bad actors to hack your PC so long as they're within wifi range.

As reported by The Register, Microsoft patched 49 security flaws with its latest Patch Tuesday update, but three are of key interest.

The first, which Microsoft says is public (but not exploited), is tracked as CVE-2023-50868, and can allow a bad actor to push your CPU to the point where it stops functioning correctly.

The second, CVE-2024-30080, concerns Microsoft Message Queuing: This flaw allows a remote attacker to send a malicious data packet to a Windows system and execute arbitrary code on it. This one doesn't necessarily affect individual users as much, but Microsoft did give it a high severity rating, and while it hasn't been exploited yet, the company thinks exploitation is more than likely.

But the last flaw seems most pressing: CVE-2024-30078 is a vulnerability affecting wifi drivers. The company says a bad actor can send a malicious data packet to a machine using a wifi networking adapter, which would allow them to execute arbitrary code. In practice, this could allow someone within wifi range of another user to hack their computer from that wifi connection alone. And since the flaw affects many different versions of Windows, attackers will likely try to exploit it as soon as possible.

It's a chilling concept: If someone learns how to exploit this flaw, they could use it to attack other Windows PCs in their immediate vicinity. Imagine the field day a hacker could have going to a high-density area of laptop users like a coffee shop or shared workspace. Fortunately, the latest security updates for both Windows 10 and Windows 11 patch these issues, so once you're updated, you're safe to return to your office in the corner of the café.

How to install the latest patches on your Windows PC

If you're running Windows 11, head to Start > Settings > Windows Update. On Windows 10, head to Start > Settings > Update & Security > Windows Update. Either way, hit Check for updates. Once available, download and install it on your PC.

Apple’s Explanation for Why You Need an iPhone 15 Pro to Use Apple Intelligence Seems Sus

20 June 2024 at 15:00

"AI for the rest of us." That's how Apple advertises Apple Intelligence on its website, the company's upcoming generative AI experience. The problem is, that tagline only applies if you have the right device: namely, a newer Mac, or a brand-new iPhone.

Apple Intelligence is chock-full of features we haven't seen on iOS, iPadOS, and macOS before. Following in the footsteps of ChatGPT and Gemini, Apple Intelligence is capable of image generation, text workshopping, proofreading, and intelligent summaries, as well as enhancing Siri in ways that make the digital assistant, you know, actually assist you.

In order to run these features, Apple is only making Apple Intelligence available on select iPhones, iPads, and Macs. For the latter two categories, it's a rather wide net: Any M-series iPad or Mac can run Apple Intelligence. Sure, that leaves out plenty of the Intel Macs still in use today, as well as the iPads running Apple's A-series chips, but the company has been selling M-series devices since 2020. Many Mac users have adopted Apple silicon, which means they'll see these AI features when they update to macOS Sequoia in the fall—or, at least, the features Apple has managed to roll out by then.

However, things aren't so liberal on the iOS side of things. Only those of us with an iPhone 15 Pro or 15 Pro Max can run Apple Intelligence when it's available with a future version of iOS 18. That's because Apple requires the A17 Pro chip for running Apple Intelligence on iOS, which the company has only put into these particular iPhones so far. Even the iPhone 15 and 15 Plus, which launched at the same time as the Pros, can't run Apple Intelligence, because they're using the previous year's A16 Bionic chip.

Why Apple Intelligence is only available on newer Apple devices

Apple's stance is that Apple Intelligence is so demanding that it needs to run on the most powerful hardware the company currently has available. A large part of that is the processing power the desktop-class M-series chips have, as well as the minimum 8GB of unified RAM. (The iPhone 15 Pro also comes with 8GB of RAM.) But the main component as far as Apple Intelligence is concerned is likely the Neural Engine: While Apple has included a Neural Engine in all iPhone chips since the A11 Bionic in the iPhone X, 8, and 8 Plus, Apple only started adding a Neural Engine to the Mac with the M1.

That stance is largely reflected in an interview between John Gruber of Daring Fireball and Apple's marketing chief Greg Joswiak, who had this to say when asked why older Apple devices can't run Apple Intelligence:

So these models, when you run them at run times, it's called inference, and the inference of large language models is incredibly computationally expensive. And so it's a combination of bandwidth in the device, it's the size of the Apple Neural Engine, it's the oomph in the device to actually do these models fast enough to be useful. You could, in theory, run these models on a very old device, but it would be so slow that it would not be useful.

Essentially, Apple feels that a compromised Apple Intelligence experience isn't one worth having at all, and only wants the feature running on hardware that can "handle it." So, no Apple Intelligence for Intel Macs, nor an iPhone other than the 15 Pro.

Apple Intelligence should probably be able to run on more devices

While there is sense to that argument, it's definitely easy to take the cynical view here and assume Apple is trying to push customers into buying a new iPhone or Mac. I don't really think that's the case, but I don't buy the idea that Apple Intelligence can only run on these devices. Keeping Apple Intelligence to the M-series Macs makes the most sense to me: These are the Macs with Apple's Neural Engine, so it's easiest to get these AI features up and running.

It's the iPhone and iPad side of things that rubs me the wrong way. These devices have Neural Engines built into their SoCs. Sure, they might not be as powerful as the Neural Engine in the iPhone 15 Pro (Apple says the A17 Pro's Neural Engine is up to twice as fast as the A16's), but I have trouble believing an Apple Neural Engine from 2022 isn't fast enough to handle features a chip made in 2023 can. I also wouldn't be surprised if Apple could get Apple Intelligence working well on a higher-end Intel Mac, though at least those devices don't have Neural Engines at all, so the cutoff makes more sense.

Not to mention, not all the processing is going to be happening on-device anyway. When iOS or macOS thinks a process is too intensive for the A17 Pro or M-series chip to handle itself, it outsources that processing to the cloud—albeit, in Apple fashion, as privately as possible. Even if the A16 Bionic can't handle as many local AI processes as the A17 Pro, how much would the experience be downgraded by outsourcing more of those processes to the cloud?

Who wants Apple Intelligence anyway?

But here's the thing: Even if Apple is choosing to omit Apple Intelligence from the iPhone 15 and earlier unnecessarily, I don't think it's to sell more iPhone 15 Pros. I think it simply doesn't want to waste the resources optimizing a feature that doesn't have a ton of demand. Despite ChatGPT's popularity and notoriety, I don't see "more AI" as something most iPhone and Mac customers are looking for in their devices. I think most customers buy a new iPhone or new Mac for the essential features, like keeping up with friends (especially over iMessage), taking solid photos, and using their favorite apps. AI features baked into the OS could be a plus, but it's tough to say when there's really no precedent yet for consumers purchasing hardware made for AI.

Personally, if I had an Intel Mac or an iPhone 14 Pro that was working fine, I wouldn't see this as a reason to upgrade, even if Siri sounds more useful now. I think Apple knows that, and doesn't want to waste time developing these features for older devices. It probably doesn't have the resources for it anyway—the company is staggering the release of key AI features, like Siri's upgrades, so it has time to make sure everything works as it should before committing to the next set of AI options.

While Apple Intelligence might be the feature set grabbing most of the headlines, most people are going to update their iPhones and find other useful changes instead—some indeed powered by AI. You'll have the option to totally customize your Home Screen now, with control over where app icons go and even what they look like. You'll be able to send messages over satellite without a cellular connection, and you'll find new text effects and options in Messages. You'll even be able to mirror your iPhone's display to your Mac, if that's something you want to do.

The point is, there are a lot of new features coming to iPhones and Macs compatible with iOS 18 and macOS 15—even if Apple Intelligence isn't among them. I get Apple's reasoning here, and while I bet the company could run Apple Intelligence on older devices, I don't think you're going to be missing out on much. We'll have to see once Apple Intelligence does actually arrive—one piece at a time.

The Best New Features You Can Use on Microsoft Copilot Right Now

18 June 2024 at 10:00

Microsoft's AI chatbot, Copilot, has been steadily growing and adding new features since its introduction last year. (At that time, Microsoft called it Bing Chat.) As with all things AI, it can be difficult to keep up with the updates, changes, and new features, but Microsoft is adding them to Copilot at a steady clip. Here are some of the best features and changes Microsoft has made to Copilot this year.

Copilot has an app now

If you're still using the Copilot web app, feel free to keep doing so. However, since the beginning of this year, Microsoft has offered Copilot as a dedicated mobile app as well. You can choose to use the experience signed in or signed out, but signing into your Microsoft account gives you access to more features (including bypassing the very strict prompt limit).

Everyone can use Copilot in Microsoft 365 (if you pay)

One of Copilot's flagship features is its integration with Microsoft 365. Microsoft turned the bot into an AI Clippy, adding AI assistant options to apps like Word, Excel, PowerPoint, and OneNote. However, Copilot in 365 was only available to business users—the rest of us who use these apps outside of work were out of luck.

That changed early this year, when Microsoft rolled out Copilot support in Microsoft 365 to all Copilot Pro users. As long as you subscribe to the plan for $20 per month, you can try out Copilot in this suite of apps. While it's a pricey subscription, if you're interested in Copilot, it might be worth the price, since Microsoft is adding most of Copilot's new features to Microsoft 365 apps.

You can use Copilot in Outlook

Previously, if you wanted to use Copilot in Outlook, you needed to head to the web app or go the long way through Microsoft Teams. Since last month, however, Microsoft has offered Copilot support in the Outlook app itself. That makes it easier to use some of the new Copilot features in Outlook, like email draft coaching and choosing the tone of a draft (e.g., neutral, casual, or formal).

Reference files when prompting Copilot

Since last month, you've been able to pull in files from your device, SharePoint, and OneDrive when prompting Copilot. If you want the bot to summarize a Word doc, or to have the context of a PowerPoint presentation when responding to your prompt, just type a / when prompting to pull up the file locator.

New options in Word with Copilot

Personally, if there's one app that could benefit most from Copilot, I feel it's Word. In my opinion, generative AI's main strength is working with text, so having an assistant to help you manage your word processing could be a big help.

This year, Microsoft has given Copilot in Word a boost. Here are some of the highlights:

  • Use Rewrite on specific sections of a document.

  • Highlight a portion of text to summarize and share.

  • Create tables from your text.

  • Make new tables based on the format of previous tables in your doc.

  • Confidential docs are labeled as confidential when referencing them in new docs.

New features for Copilot in Excel

Microsoft has been adding new Copilot features to Excel, as well. Since the beginning of this year, here's what you've been able to do:

  • Request a chart of your data.

  • Ask Copilot follow-up questions, including requesting clarifications to previous responses.

  • Generate formula column options with one prompt.

  • Use Copilot to figure out why you're running into issues with a task.

Copilot in OneNote

OneNote has actually gained quite a few new Copilot features since January. If you have access to Copilot in OneNote and frequently use the app, here's what you can expect:

  • Create notes from audio recordings and transcriptions, then ask Copilot to summarize the notes and arrange them in different ways.

  • Create to-do lists with Copilot.

  • Copilot can search through information within your organization for added context to your requests.

  • Ask Copilot to organize your notes for you.

Copilot for Teams got an upgrade

If you use Copilot in Teams, you may notice that the bot can now automatically take notes during meetings. If you head to the meeting's Recap, you can get a summary of what your team just discussed on the call.

You may also see a new Copilot option attached to the top of your Teams chats. This lets you quickly prompt Copilot inside chats, pulling in documents with the / key. You'll also see that Teams will alert you when AI is being used in a meeting, such as when Copilot is in use without transcriptions.

Let the AI do the prompting for you

Soon, Copilot will start autocompleting your prompts for you. When you start typing, the bot will offer suggestions for what it thinks you might want to do. If you say "Summarize," before you can say what you want summarized, Copilot will guess what you want to round up, including things like your "last 10 emails."

When It’s OK to Use AI at Work (and When It’s Not)

18 June 2024 at 08:30

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

Almost as soon as ChatGPT launched in late 2022, the world started talking about how and when to use it. Is it ethical to use generative AI at work? Is that “cheating?” Or are we simply witnessing the next big technological innovation, one that everyone will either have to embrace, or fall behind dragging their feet?

AI is now a part of work, whether you like it or not

AI, like anything else, is a tool first and foremost, and tools help us get more done than we can on our own. (My job would literally not be possible without my computer.) In that regard, there’s nothing wrong, in theory, with using AI to be more productive. In fact, some work apps have fully embraced AI. Just look at Microsoft: The company basically defined “computing at work,” and it's adding AI functionality directly into its products.

Since last year, the entire Microsoft 365 suite—including Word, PowerPoint, Excel, Teams, and more—has adopted “Copilot,” the company’s AI assist tool. Think of it like Clippy from back in the day, only now way more useful. In Teams, you can ask the bot to summarize your meeting notes; in Word, you can ask the AI to draft a work proposal based on your bullet list, then request it tighten up specific paragraphs you aren’t thrilled with; in Excel, you can ask Copilot to analyze and model your data; in PowerPoint, you can ask for an entire slideshow to be created for you based on a prompt.

These tools don’t just exist: They’re being actively created by the companies that make our work products, and their use is encouraged. It reminds me of how Microsoft advertised Excel itself back in 1990: The ad presents spreadsheets as time-consuming, rigid, and featureless, but with Excel, you can create a working presentation in an elevator ride. We don’t see that as “cheating” work: This is work.

Intelligently relying on AI is the same thing: Just as 1990's Excel extrapolates data into cells you didn’t create yourself, 2023's Excel will answer questions you have about your data, and will execute commands you give it in normal language, rather than formulas and functions. It’s a tool.

What work shouldn’t you use AI for?

Of course, there’s still an ethical line you can cross here. Tools can be used to make work better, but they can also be used to cheat. If you use the internet to hire someone else to do your job, then pass that work off as your own, that’s not using the tool to do your work better. That’s wrong. If you simply ask Copilot or ChatGPT to do your job for you in its entirety, same deal.

You also have to consider your own company’s guidelines when it comes to AI and the use of outside technology. It’s possible your organization has already established these rules, given AI’s prominence over the past year and a half or so: Maybe your company is giving you the green light to use AI tools within reason. If so, great! But if your company decides you can’t use AI for any purpose as far as work is concerned, you might want to log out of ChatGPT during business hours.

But, let’s be real: Your company probably isn’t going to know whether or not you use AI tools if you’re using them responsibly. The bigger issue here is privacy and confidentiality, and it’s something not enough people think about when using AI in general.

In brief, generative AI tools work because they are trained on huge sets of data. But AI is far from perfect, and the more data the system has to work with, the more it can improve. You train AI systems with every prompt you give them, unless the service allows you to specifically opt out of this training. When you ask Copilot for help writing an email, it takes in the entire exchange, from how you reacted to its responses, to the contents of the email itself.

As such, it’s a good rule of thumb to never give confidential or sensitive information to AI. An easy way to avoid trouble is to treat AI like you would your work email: Only share information with something like ChatGPT that you’d be comfortable emailing a colleague. After all, your emails could very well be made public someday: Would you be OK with the world seeing what you said? If so, you should be fine sharing with AI. If not, keep it away from the robots.

If the service offers you the choice, opt out of this training. By doing so, your interactions with the AI will not be used to improve the service, and your previous chats will likely be deleted from the servers after a set period of time. Even so, always refrain from sharing private or corporate data with an AI chatbot: If the developer keeps more data than we realize, and they're ever hacked, you could put your work data in a precarious place.

Four Ways to Build AI Tools Without Knowing How to Code

18 June 2024 at 08:00

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

There’s a lot of talk about how AI is going to change your life. But unless you know how to code and are deeply aware of the latest advancements in AI tech, you likely assume you have no part to play here. (I know I did.) But as it turns out, there are companies out there designing programs to help you build AI tools without needing a lick of code.

What is the no-code movement?

The idea behind “no-code” is simple: Everyone should be able to build programs, tools, and other digital services regardless of their level of coding experience. While some take a “low-code” approach, which still requires some coding knowledge, the services on this list are strictly “no-code.” Specifically, they’re no-code solutions to building AI tools.

You don’t need to be a computer scientist to build your own AI tools. You don’t even need to know how to code. You can train a neural network to identify a specific type of plant, or build a simple chatbot to help customers solve issues on your website.

That being said, keep your expectations in check here: The best AI tools are going to require extensive knowledge of both computer science and coding. But it’s good to know there are utilities out there ready to help you build practical AI tools from scratch, without needing to know much about coding (or tech) in the first place.

Train simple machine-learning models for free with Lobe

If training a machine learning model sounds like something reserved for the AI experts, think again. While it’s true that machine learning is a complicated practice, there’s a way to build your own model for free with as few tools as a laptop and a webcam.

That’s thanks to a program called Lobe: The free app, owned by Microsoft, makes it easy to build your own machine learning model to recognize whatever you want. Need your app to differentiate between colors? You can train it to do that. Want to make a program that can identify different types of plants? Train away.

Microsoft’s example video shows that you can train a model to identify when someone is drinking from a cup in only a few minutes. While you can include any images you may have previously taken, you can also simply snap some photos of yourself drinking from a cup with your webcam. Once you take enough sample photos of yourself drinking and not drinking, you can use those photos to train the model.

You can then test the model to see how well (or not) it can predict if you’re drinking from a cup. In this example, it does a great job whenever it sees the cup in hand, but it incorrectly identifies holding a hand to your face as drinking as well. You can use feedback buttons to tell the model when it gets something wrong, so it can quickly retrain itself based on this information and hopefully make more accurate predictions going forward.

Google also has a similar tool for training simple machine-learning models called Teachable Machine, if you’d like to compare its offering to Microsoft’s.

Build your own AI chatbot with Juji Studio

AI chatbots are all the rage lately. ChatGPT, of course, kicked off the modern AI craze because of its accessible yet powerful chat features, but everything from Facebook Messenger to healthcare sites has used chatbots for years. While OpenAI built ChatGPT with years of expertise, you can make your own chatbot without typing a single line of code.

Juji Studio wants to make building a light version of ChatGPT, in the company’s words, as easy as making PowerPoint slides. The program gives you the tools to build a working chatbot you can implement into your site or Facebook Messenger. That includes controlling the flow of the chatbot, adjusting its personality, and feeding it a Q&A list so it can accurately answer specific questions users might have.

Juji lets you start with a blank canvas, or base your chatbot on one of its existing templates. Templates include customer service bots, job interview bots, teaching assistant bots, and bots that can issue user experience surveys. No matter what you choose, you’ll see the “brains” of your bot in a column on the left side of the screen.

It really does resemble PowerPoint slides: Each “slide” corresponds to a different task for the chatbot to follow. For example, with the customer service chatbot, you have an “invite user questions until done” slide, which is pre-programmed to listen to user questions until the user gives a “done” signal. You can go in and customize the prompts the chatbot will ask the user, such as asking for an account number or email address, or even more personal questions, like asking about a bad experience the user had, or the best part of their day.

You can, of course, customize the entire experience to your needs. You can build a bot that changes its approach based on whether the user responds positively or negatively to an opinion-based question.

Build custom versions of Copilot or ChatGPT

Chatbots like Copilot and ChatGPT can be useful for a variety of tasks, but when you want to use AI for a specific function, you'll want to turn to GPTs. GPTs, not to be confused with OpenAI's GPT AI models, are custom chatbots that can be built to serve virtually any purpose. Best of all, there's no coding necessary. Instead, you simply tell the bot what you want, and the service walks you through the process to set up your GPT.

You can build a GPT that helps the user learn a language, plans a meal and teaches you how to make it, or generates logos for different purposes. Really, whatever you want your chatbot to do, you can build a GPT to accomplish it. (Or, at least create a chatbot that's more focused on your task than ChatGPT or Copilot in general.)

You can access Copilot GPTs if you subscribe to Copilot Pro. OpenAI used to lock its GPTs behind a subscription, but the company is making them free for all users. Plus, OpenAI lets users put their custom-built GPTs on the GPT Store. If you don't want to make your own, you can browse other users' creations and try them out for yourself.

Create anything you want with Bubble

For the ultimate no-code experience, you’ll want to use a tool like Bubble. You use an interface similar to Photoshop to build your app or service, dragging and dropping new UI elements and functions as necessary.

But while Bubble is a no-brainer for us code-illiterates to build things, it’s also integrated with AI. There are tons of AI applications you can include in your programs using Bubble: You can connect your builds to OpenAI products like GPT and DALL-E, while at the same time taking advantage of plugins made by other Bubble members. All of these tools allow you to build a useful AI program by yourself—something that uses the power of GPT without needing to know how it works in the first place.

One of the best ways to get started here is by taking advantage of OpenAI Playground. Playground is similar to ChatGPT, in that it’s based on OpenAI’s large language models, but it isn’t a chatbot. As such, you can use Playground to create different kinds of products and functions that you can then easily move to a Bubble project using the “View Code” button.

A Brief History of AI

18 June 2024 at 07:00

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

You wouldn’t be blamed for thinking AI really kicked off in the past couple years. But AI has been a long time in the making, spanning most of the 20th century. It's difficult to pick up a phone or laptop today without seeing some type of AI feature, but that's only because of work going back nearly one hundred years.

AI’s conceptual beginnings

Of course, people have been wondering if we could make machines that think for as long as we’ve had machines. The modern concept came from Alan Turing, a renowned mathematician well known for his work in deciphering Nazi Germany’s “unbreakable” code produced by their Enigma machine during World War II. As the New York Times highlights, Turing essentially predicted what the computer could—and would—become, imagining it as “one machine for all possible tasks.”

But it was what Turing wrote in “Computing Machinery and Intelligence” that changed things forever: The computer scientist posed the question, “Can machines think?” but also argued this framing was the wrong approach to take. Instead, he proposed a thought-experiment called “The Imitation Game.” Imagine you have three people: a man (A), a woman (B), and an interrogator, separated into three rooms. The interrogator’s goal is to determine which player is the man and which is the woman using only text-based communication. If both players were truthful in their answers, it’s not such a difficult task. But if one or both decides to lie, it becomes much more challenging.

But the point of the Imitation Game isn’t to test a human’s deduction ability. Rather, Turing asks you to imagine a machine taking the place of player A or B. Could the machine effectively trick the interrogator into thinking it was human?

Kick-starting the idea of neural networks

Turing was the most influential spark for the concept of AI, but it was Frank Rosenblatt who actually kick-started the technology’s practice, even if he never saw it come to fruition. Rosenblatt created the “Perceptron,” a computer modeled after how neurons work in the brain, with the ability to teach itself new skills. The computer had a single-layer neural network, and it worked like this: You have the machine make a prediction about something—say, whether a punch card is marked on the left or the right. If the computer is wrong, it adjusts to be more accurate. Over thousands or even millions of attempts, it “learns” the right answers instead of having to predict them.

That design is based on neurons: You have an input, such as a piece of information you want the computer to recognize. The neuron takes the data and, based on its previous knowledge, produces a corresponding output. If that output is wrong, you tell the computer, and adjust the “weight” of the neuron to produce an outcome you hope is closer to the desired output. Over time, you find the right weight, and the computer will have successfully “learned.”
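That weight-adjustment loop can be captured in a few lines of code. Below is a minimal, illustrative sketch of a single-layer perceptron in Python (not Rosenblatt's original implementation, which was realized in custom hardware): whenever a prediction is wrong, each weight is nudged toward the correct answer, exactly the process described above.

```python
# Minimal sketch of Rosenblatt-style perceptron learning (illustrative only).
# A single "neuron" holds one weight per input plus a bias. When a prediction
# is wrong, each weight is nudged toward the correct answer.

def predict(weights, bias, x):
    # Fire (1) if the weighted sum clears the threshold, otherwise 0.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=1.0):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)  # -1, 0, or +1
            # Only wrong predictions (error != 0) change the weights.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a task a single layer can handle.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train(samples, labels)
predictions = [predict(weights, bias, x) for x in samples]
```

Because a single layer can only separate linearly separable data, a sketch like this learns a task such as logical AND but fails on XOR, which is precisely the limitation that stalled the Perceptron and that multi-layer networks later overcame.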

Unfortunately, despite some promising attempts, the Perceptron simply couldn’t follow through on Rosenblatt’s theories and claims, and interest in both it and the practice of artificial intelligence dried up. As we know today, however, Rosenblatt wasn’t wrong: His machine was just too simple. The perceptron’s neural network had only one layer, which isn’t enough to enable machine learning on any meaningful level.

Many layers make machine learning work

That’s what Geoffrey Hinton discovered in the 1980s: Where Turing posited the idea, and Rosenblatt created the first machines, Hinton pushed AI into its current iteration by theorizing that nature had cracked neural network-based AI already in the human brain. He and other researchers, like Yann LeCun and Yoshua Bengio, proved that neural networks built upon multiple layers and a huge number of connections can enable machine learning.

Through the 1990s and 2000s, researchers would slowly prove neural networks’ potential. LeCun, for example, created a neural net that could recognize handwritten characters. But it was still slow going: While the theories were right on the money, computers weren’t powerful enough to handle the amount of data necessary to see AI’s full potential. Moore’s Law finds a way, of course, and around 2012, both hardware and data sets had advanced to the point that machine learning took off: Suddenly, researchers could train neural nets to do things they never could before, and we started to see AI in action in everything from smart assistants to self-driving cars.

And then, in late 2022, ChatGPT blew up, showing professionals, enthusiasts, and the general public alike what AI could really do, and we’ve been on a wild ride ever since. We don’t know what the future of AI actually has in store: All we can do is look at how far the tech has come, what we can do with it now, and imagine where we go from here.

Living with AI

To that end, take a look through our collection of articles all about living with AI. We define AI terms you need to know, walk you through building AI tools without needing to know how to code, talk about how to use AI responsibly for work, and discuss the ethics of generating AI art.

Here's When Apple Plans to Roll Out Its Biggest Apple Intelligence Features

17 June 2024 at 15:30

Apple made a splash during last week's WWDC keynote when it announced Apple Intelligence. It's the company's official foray into the trendy AI features most tech companies have adopted already. While Apple Intelligence might have generated the most headlines over the past week, many of its main features will not be present when you update your iPhone, iPad, or Mac this fall.

According to Bloomberg's Mark Gurman, Apple is staggering the rollout of these highly-anticipated AI features. A key reason is, simply, these features just aren't ready yet. Apple has been scrambling for over a year to implement generative AI features in its products, after the tech exploded in late 2022. (Thanks, ChatGPT.) Many of these features are quite involved, and will take more time to get right.

That said, Apple probably could release these features sooner and in larger batches if it wanted to, but there's a strategy here: By rolling out big AI features in limited numbers, Apple can root out any major issues before adding more AI to the mix (AI hallucinates, after all), and can continue to build up its cloud network without putting too much pressure on the system. It helps that the company is keeping these features to a specific, small pool of Apple devices: iPhone 15 Pro and 15 Pro Max (and likely the iPhone 16 line), as well as M-Series Macs and iPads.

Apple Intelligence in 2024

If you installed the iOS 18 or macOS 15 beta right now, you might think no Apple Intelligence features were going to be ready in the fall. That's because Apple is delaying these AI features for beta testers until sometime this summer. As the public beta is scheduled to drop in July, it seems safe to assume Apple is planning on dropping Apple Intelligence next month, though we don't know for sure.

There are some AI features currently in this first beta, even if they aren't strictly "Apple Intelligence" features: iOS 18 supports transcriptions for voice memos as well as enhanced voicemail transcriptions, and supports automatically calculating equations you type out. It's a limited experience, but seeing as it's only the first beta, we'll see more features soon.

In fact, Apple currently plans to roll out some flagship features with the first release of Apple Intelligence. That includes summaries for webpages, voice memos, notes, and emails; AI writing tools (such as rewriting and proofreading); and image generation, including the AI-generated emojis Apple is branding "Genmoji." You'll also receive AI summaries of notifications and see certain alerts first based on what the AI thinks is most important.

In addition, some of Siri's new updates will be out with iOS 18's initial release. This fall, you should notice the assistant's new UI, as well as the convenient new option for typing to Siri. But most of Siri's advertised features won't be ready for a while. (More on that below.)

The timeline for ChatGPT integration is also a bit up in the air: It may not arrive with the first release of iOS 18 in the fall, but Gurman believes it'll be here before the end of the year. For developers, Xcode's AI assistant, Swift Assist, is likely not out until later this year.

Apple Intelligence's new Siri won't be here until 2025

The largest delay appears to be to Siri's standout upgrades, many of which won't hit iOS and macOS until 2025. That includes contextual understanding and actions: The big example from the keynote was when a demonstrator asks Siri when her mom's flight is getting in, and the digital assistant is able to answer the question by pulling data from multiple apps. This "understanding," which would power many convenient actions without your needing to explicitly tell Siri what you want it to do, needs more time to bake.

In addition, Siri's ability to act within apps based on user commands won't arrive until next year. When available, you'll be able to ask Siri to edit a photo and then add it to a message before sending it off. Siri will actually feel like a smart assistant that can do things on your iPhone, iPad, and Mac for you, but that takes time.

Siri also won't be able to analyze and understand what's happening on your screen until 2025. Next year, you should be able to ask Siri a simple question based on what you're doing on your device, and the assistant should understand. If you're trying to make movie plans with someone to see Inside Out 2, you could ask Siri "when is it playing?" and Siri should analyze the conversation and return results for movie times in your area.

Finally, Apple Intelligence remains English-only until at least next year. Apple needs more time to train the AI on other languages. As with other AI features, however, this is one that makes a lot of sense to delay until it's 100% ready.

AI might be the focus of the tech industry, but big AI features often roll out to disastrous ends. (Just look at Google's AI Overviews or Microsoft's Recall feature.) The more time Apple gives itself to get the tech right, the better. In the meantime, we can use the new features that are already available.

How to Watch the Latest Nintendo Direct

17 June 2024 at 12:00

Nintendo is back with some news: The company just announced a new Nintendo Direct in a post on X (formerly Twitter). According to the post, this event will focus on Nintendo Switch games slated for release in the second half of 2024, but beyond that, we don't know much else.

Before you get your hopes up, no, this event will not reveal any information about the Nintendo Switch 2. That's not speculation, either: Nintendo said as much in its announcement post, directly stating, "There will be no mention of the Nintendo Switch successor during this presentation."

It's a smart move on the company's part: Nintendo undoubtedly knows the gaming community's collective focus is on the Nintendo Switch 2, and following Nintendo's president's confirmation of the console's existence last month, it would make some sense for Nintendo to acknowledge it in a new Direct. Squashing those expectations early means fans can go into this event without being disappointed by the lack of Switch 2 updates.

But what is Nintendo actually going to announce, here? The Switch subreddit is full of guesses: Some hope Nintendo will finally announce Switch ports for Wind Waker HD and Twilight Princess HD, the two remastered Zelda games from the Wii U still not on the company's latest console. Others hope for Metroid Prime news, whether that's remastered versions of the second and third Prime games, or the long-awaited fourth game in the series. Maybe there will be more retro games added to Nintendo Switch Online, or a brand-new top-down Zelda game, which would be the first in the series since 2013's A Link Between Worlds on 3DS.

Of course, this is all purely speculation: Now that we're heading into the last year of the OG Switch, there's really no telling what Nintendo will do here. We'll just have to wait and see.

How to watch the latest Nintendo Direct

Nintendo is holding its latest Direct event on Tuesday, June 18 at 7 a.m. PT (10 a.m. ET). The event will last for about 40 minutes, so block off your schedule until 7:40/10:40.

You can tune in from Nintendo's official YouTube page.

iOS 18's Satellite Messaging Is a Game Changer

14 June 2024 at 12:30

With the iPhone 14, Apple introduced a new way to communicate: Emergency SOS via Satellite. With it, you can reach out to emergency services even when you have no signal. The feature guides you on how to connect your iPhone to the nearest satellite overhead, and once connected, allows you to contact help (albeit in a much more limited fashion, and more slowly, than usual).

It's a fantastic safety feature, both for those who frequent areas with low cellular coverage and for emergencies in which cell service is unavailable. But that latter point is really the main downside of the feature: It's only available for emergencies. If you don't have any service and you're perfectly safe, you can't use the feature to simply send a message to a friend or family member to check in. Unless you want to get the police involved in your update, you'll just have to wait until you're back within range of a cell signal or wifi.

Messages via satellite

That changes with iOS 18: Apple's upcoming OS (currently in beta testing) includes an update to its satellite communications feature. When it drops, you'll be able to send any message via satellite, not just emergency ones. So, when you happen to be totally without service, not only can you send an update letting people know you're okay, you can also keep up with your chats as you normally would.

When it comes to iMessage, almost nothing about the experience is compromised. You'll be able to send and receive messages, emojis, and Tapbacks (the reactions such as "thumbs up" or "Ha Ha"). Plus, all of your messages are still end-to-end encrypted, so relaying your messages via satellite is no less secure than using cell towers or the internet. You don't need to do anything special to trigger the feature, either: Once your iPhone loses a network connection and switches to "SOS only," you'll see a notification on the Lock Screen inviting you to message via satellite. You don't even need to tap this alert, though. Just start typing a message, and if there's no service, your iPhone will send it via satellite automatically.

You'll know this is happening, because there will be a "Satellite" tag next to the "iMessage" tag in the text field in your thread. You might also be clued in because some messages may take quite a while to send and receive, as they're beaming up to a satellite first before being routed to their destination. As with Emergency SOS via Satellite, iOS will guide you on angling your iPhone towards the nearest satellite overhead. You'll need a clear view of the sky, with few (if any) tall obstructions, including trees and buildings. Assuming conditions are correct, however, you'll be able to message away.

iMessages will come in automatically, even over satellite, so while you might not keep up with the messages as quickly as you normally would, they'll all eventually arrive. However, SMS texts will only work if you initiate the conversation: If an Android friend texts you while you're out of service, for example, you won't receive it. But if you send a message, you'll receive their direct response.

Unfortunately, the feature doesn't support RCS, the texting protocol iOS 18 is finally adopting. While that's mildly disappointing, the feature itself is so cool I can completely overlook the omission. Lack of service no longer means missing out on communications: You won't drive down a remote road and then receive a barrage of missed iMessages once you reconnect, because those messages will arrive on your iPhone as they're sent. You can take a trip somewhere without internet and still give people updates about your experience.

Of course, if you're the kind of person who enjoys these little breaks from society, there's always the foolproof solution: turning off your iPhone altogether.

A Glitch Has Permanently Enabled Motion Smoothing on (Some) Roku TVs

13 June 2024 at 17:30

It would appear that some Roku TVs are now self-sabotaging their owners. And by that, I mean they're enabling motion smoothing without any way to turn it off.

Between this complaint on the Roku subreddit, this thread on Roku's customer forums, and reports from staffers of The Verge, this issue appears to be more than a one-off, even if it isn't necessarily widespread. The problem is occurring on TCL TVs running Roku OS 13: Reportedly, these TVs started using motion smoothing out of nowhere, with no notice and no option to disable the feature. Users looking in both the standard and picture settings can't find a "motion smoothing" option to turn it off.

This bug isn't going unaddressed: In the forum thread, a Roku community mod confirmed the company is investigating the issue, and included the standard instructions for disabling motion smoothing.

What is motion smoothing, and why is it bad?

Motion smoothing (or "Action Smoothing," as Roku calls it) is a feature on HD and 4K TVs that essentially adds new frames to whatever content you're watching. Video is made up of individual pictures, or frames, and most shows and movies run at 24 or 30 frames per second. With motion smoothing, your TV analyzes the content and creates extra frames on the fly to smooth out the motion of the image. Some content, like live sports and video games, is made better by the additional frames, since they help viewers keep track of fast-moving action. Almost always, though, artificially adding frames makes the image look worse, not better.

It's particularly bad when watching shows and movies: Doubling the frame rate and making the motion smoother is what gives this content the "soap opera" effect. Soap operas are filmed at a higher frame rate than most other shows and movies, so when you double the frame rate of 24 or 30 fps content, it looks like daytime television. That isn't a compliment.
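To make the idea concrete, here's a minimal Python sketch of frame interpolation using simple linear blending, with each frame reduced to a single brightness value. Real TVs use far more sophisticated motion-compensated interpolation, so treat this purely as a toy illustration of synthesizing in-between frames:

```python
def interpolate_frames(frames, factor=2):
    """Naively raise a clip's frame rate by inserting blended frames.

    `frames` is a list of per-frame brightness values (a stand-in for
    full images). Linear blending between neighbors illustrates the
    core idea of creating frames that never existed in the source.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):  # (factor - 1) blended frames per gap
            t = i / factor
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])
    return out

# A 24 fps source roughly becomes 48 fps output.
print(interpolate_frames([0.0, 10.0, 20.0]))  # [0.0, 5.0, 10.0, 15.0, 20.0]
```

The blended frames are averages of their neighbors, which is exactly why fast motion can look unnaturally "smooth": the TV is showing you images the filmmakers never shot.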

How to disable Action Smoothing on Roku TVs

While some users experiencing this motion smoothing bug won't see the option to disable Action Smoothing at this time, other Roku TV users can control the setting.

First, press the Star button on your Roku TV remote while watching something, then scroll down and choose Advanced Picture Settings. Here, you should be able to control the Action Smoothing settings. Roku says that if you can't see this setting, your TV doesn't support the feature. That's likely cold comfort to users watching the new season of House of the Dragon as if it were Game of Thrones' first soap opera.

YouTube Is Experimenting With a Way to Kill Ad Blockers for Good

13 June 2024 at 15:00

YouTube is getting aggressive in its fight against ad blockers. According to the developer of SponsorBlock, an extension that automatically skips ahead of sponsored content in videos, YouTube is now experimenting with "server-side ad injection."

This is quite the escalation. In short, server-side ad injection means YouTube is adding advertisements to the video stream itself. Currently, the company delivers its ads to users as a separate video before the video you chose to watch. That allows ad blockers to identify the ad, stop it from playing, and load your video directly. If the ad is part of the video, however, the traditional ad blocker strategy breaks.

Even though SponsorBlock isn't an ad blocker, this change would break its service, too: Adding ads to the video itself throws off the video's timestamps. SponsorBlock relies on those timestamps to skip past sponsored segments, and since ads vary in length and number, the shifts are unpredictable, so tools like SponsorBlock won't work as they're currently designed.
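To see why injected ads break timestamp-based tools, here's a hypothetical Python sketch. The function name and the `ad_segments` input are made up for illustration; the point is that every ad inserted before a given moment pushes that moment later in the delivered stream, and a real client has no reliable way to know each viewer's ad placements and lengths:

```python
def shift_timestamp(original_ts, ad_segments):
    """Map a timestamp from the original video to an ad-injected stream.

    `ad_segments` is a hypothetical list of (insert_point, ad_length)
    pairs, both in seconds of original video time. Each ad inserted at
    or before `original_ts` delays that moment in the delivered stream.
    """
    shift = sum(length for insert_point, length in ad_segments
                if insert_point <= original_ts)
    return original_ts + shift

# A sponsor segment stored at 120 s lands at 150 s in the stream
# once a 30 s ad is injected at the 60 s mark.
print(shift_timestamp(120.0, [(60.0, 30.0)]))  # 150.0
```

A database of crowd-sourced timestamps like SponsorBlock's only works if everyone's stream lines up; per-viewer ad injection breaks that shared clock.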

It's the latest development in the ongoing battle between YouTube and third-party ad blockers. While YouTube has always discouraged viewers from using ad blockers, the company started cracking down on the tools last year: When using certain ad blockers in some browsers, users saw a pop-up warning them to disable the blocker. If they kept using it, they might have found that YouTube wouldn't load for them at all. Even when YouTube didn't block videos entirely, the site might artificially slow down load times or skip to the end of the video. YouTube has also gone after third-party clients with ad blockers built in, so those are no longer a reliable alternative.

This new server-side injection strategy is not official policy yet, and is only reportedly in testing, but it's clear YouTube isn't backing down here—and it's not difficult to understand why. YouTube's main source of revenue, as with many corners of the internet, is advertising. By using an ad blocker, users block both YouTube and its creators from generating money from views.

Of course, using the internet without an ad blocker is a bit miserable, and has been for years. With the concerning rise of malicious advertising, too, using an ad blocker is actually good cybersecurity practice. Hell, even the FBI recommends you use one.

For YouTube, there's a clear solution: YouTube Premium. If you subscribe, you can watch YouTube mostly ad-free, without worrying about using an ad blocker that will break your experience with the site. While avid YouTube fans might find value in the service, as it also comes with YouTube Music, casual YouTube users might balk at adding another subscription to their ever-growing list of streaming services. There is a one-month free trial, so you can try it out without financial commitment. And if you are interested, YouTube now offers the following plans:

  • Individual: $13.99 per month, or $139.99 per year (saves $27.89)

  • Family: $22.99 per month, for you plus five others in your household

  • Student: $7.99 per month

How to Tell If a Prime Day Promotion Is Just Hype

12 June 2024 at 18:00

Prime Day is just around the corner. For two days in July, you’ll find promotions on products from companies both big and small, all vying for your clicks and your wallet. Many of these will claim to be great deals, and that not buying the item during Prime Day will mean you miss out on some big savings. But there are a few strategies you can use to quickly figure out whether that “amazing deal” really is all that.

How to tell a good Prime Day price from a bad one

One of the best ways to tell whether a Prime Day deal is legit is to use a price tracker. These sites and tools keep tabs on a product's price across the many stores and vendors that sell it, so they can point you to the best available price and show you whether the current "deal" really is much lower than the original price or other offers out there.

A common technique to make deals look good is to pump up the price of the product: That way, when the company slashes the price for something like Prime Day, it can claim a large discount, even if the overall price tag isn’t much lower than the original price (if it's lower at all). If something originally costs $60, a company can raise the price to $75, then cut it back down to $60, claiming it took 20% off. It’s accurate, but scummy, so watch out for it.
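The arithmetic behind that trick is easy to check. This small Python sketch (with illustrative prices, not figures from any real listing) compares the discount a listing advertises against the discount relative to the product's typical price:

```python
def advertised_vs_real_discount(inflated_price, sale_price, historical_price):
    """Return (advertised %, real %) discounts, rounded to whole percents.

    The advertised discount is measured against the inflated "list"
    price; the real discount is measured against what the product
    typically sold for before the promotion.
    """
    advertised = (inflated_price - sale_price) / inflated_price
    real = (historical_price - sale_price) / historical_price
    return round(advertised * 100), round(real * 100)

# The example from above: a $60 product raised to $75, then "cut" to $60.
print(advertised_vs_real_discount(75, 60, 60))  # (20, 0)
```

A "20% off" banner over a 0% real discount is exactly what a price-history chart exposes at a glance.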

You can use a browser extension like Keepa to watch a product's price history. But other trackers, like Honey or Capital One Shopping, can help you find prices and price histories for items across multiple stores. Their browser extensions are especially useful: If there’s another store selling the same product you’re looking at on Amazon for less, you’ll get a pop-up letting you know, with a direct link to that store’s product page.

Knowing whether something is a good deal isn’t all about getting the best price, though. Sure, Honey might have confirmed this item isn’t any cheaper elsewhere on the web, but there’s more than just the general price tag to consider.

Amazon’s own products will have the best deals

It’s Amazon Prime Day, after all. The company is here to sell as much inventory as it can, but it’s happiest if you’re buying Amazon products from Amazon. As such, the best tech deals are likely going to be with Amazon’s own line of gadgets. Of course, just because an Amazon product is massively on sale, doesn't make it a "good deal." If you wanted a different brand over Amazon's, or if you just want to make sure you're getting the best version of a product, make sure to compare offerings from different companies, too.

Make sure you’re not buying an old piece of tech

I’m a big believer in old tech: I think we should be holding onto our devices for longer than many of us do. However, I don’t think companies should sell you old tech as if it were new, especially when new tech is right around the corner.

Amazon is actually sometimes helpful here: If you’re looking at an outdated version of a product, Amazon lets you know, and gives you a link to the current version of that device. However, that’s only true if Amazon carries that new version of the device or if there’s a direct successor to that product. Lines are blurred these days: Last year’s device isn’t necessarily obsolete just because there’s a new version out, so Amazon doesn’t always try to sell you on the newer product.

And that can be fine! Last-generation laptops, tablets, smartwatches, and phones have their place: Tech is advancing so rapidly that it can be frugal and practical to buy older tech that still works well. But Amazon telling you to buy something that won’t be able to update to the latest software later this year isn’t right. If you’re looking to buy a piece of tech on Prime Day, research is your friend. It’s more than OK to buy something that came out last year or the year before; what matters more is making sure the product still works as it should in 2024, and if it’ll last as long as you’d reasonably expect it to.

If the reason a device is such a good price is because it’s obsolete, that’s not a good deal.

Not everything that is “cheap” is good

On a similar note, be wary of cheap tech that simply isn’t very good. It might be affordable, but if it doesn’t work well, it’s not worth the cost.

Often, this issue arises with the many brands you've never heard of, selling items for pennies compared to other companies. Sure, you could save some money and go with these brands, but what about the long-term investment? Once Amazon's 30-day return policy is up, you're sunk if the company has no customer support channel of its own, and many of these tiny brands don't.

On the other hand, you might have heard of the brand, but the product itself just isn’t very good. It might seem like a steal to get a giant 65-inch 4K TV for $500, but if the picture quality is poor, was that really worth it? (No.)

Read the reviews (not on Amazon, if you can help it)

One way to make sure that TV is worth its steep price cut, or whether those cheap headphones are going to pass the listen test, is to read reviews for the products you’re considering buying. I’m not talking about Amazon reviews, either: Amazon’s ratings can be helpful, but they can also be compromised. Sometimes the reviews don’t even match the product they’re supposed to be talking about, which doesn’t bode well for the integrity of the review. And in the age of AI, you can never be too sure who's writing that customer review in the first place.

When it comes to tech, the best approach is to listen to the reviewers with technical experience, who put these products through their paces before issuing an opinion. An outlet like our sister site PCMag will help you figure out pretty quickly whether that TV is really worth the hype, and they show their work so you can understand how they came to their conclusions.

At the end of the day, it’s all about taking your time and doing your research—the opposite of Amazon’s “BUY IT NOW” strategy. Fight the urge to buy something on impulse, and make sure your money is going toward the best possible product for your needs.
