
Apple has solved one of the worst aspects of the MacBook Air with the new M3 variant

Apple has addressed one of the most contentious issues with the base model of its MacBook Air, with the new M3-powered version fixing the slow storage seen on its M2-powered predecessor.

9to5Mac noted that GregsGadgets on X (formerly Twitter) conducted some testing and discovered that the SSD in the entry-level MacBook Air M3 had returned to normal speed.

For those who don’t remember, when the MacBook Air M2 was released, it was discovered to have substantially slower storage than the M1 (as was the MacBook Pro M2), albeit only in the lowest-spec model (the base M2 with a 256GB SSD).

The issue was that Apple shifted to a single 256GB NAND module in the drive, rather than two 128GB storage chips as seen in the M1, which meant that storage was substantially slower (almost half the performance, in fact – which was why the change was so controversial).

However, as a teardown by Max Tech on YouTube confirmed, Apple has reverted to two NAND chips rather than one in the entry-level MacBook Air M3, restoring SSD read and write speeds to normal – that is, to the level of the M1 model, as GregsGadgets’ testing showed.


Analysis: Split decision

Apple appears to have taken note of the controversy surrounding the switch to a single NAND chip for the SSD in the base MacBook Air M2, and the company has evidently made sure not to repeat that mistake with the M3, addressing one of the most common complaints about the entry-level M2 version of the MacBook Air.

Why was Apple doing this in the first place? As you might expect, it likely comes down to the cost of the SSD: running a single chip is the less expensive option, allowing Apple to trim production costs slightly. But that reduction in the bill of materials (or BoM) for the M2 spin came at a high cost PR-wise, so it’s hardly surprising that Apple has returned to the two-chip arrangement.

Why are two chips faster? Because separate NAND modules can process operations in parallel, delivering a significant performance improvement that a single chip obviously cannot match.
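To make the idea concrete, here’s a minimal Python sketch of the difference between serving storage reads one after another and splitting them across two devices at once. It’s purely illustrative: the file names are placeholders, and a real SSD controller stripes data across NAND dies at a far lower level than this.

```python
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1024 * 1024  # read in 1MB chunks


def read_file(path: str) -> int:
    """Read a file to the end and return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            total += len(block)
    return total


def sequential(paths):
    """One 'NAND chip': requests are serviced one after another."""
    start = time.perf_counter()
    total = sum(read_file(p) for p in paths)
    return total, time.perf_counter() - start


def parallel(paths):
    """Two 'NAND chips': requests are serviced side by side."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        total = sum(pool.map(read_file, paths))
    return total, time.perf_counter() - start


if __name__ == "__main__":
    files = ["chunk_a.bin", "chunk_b.bin"]  # placeholder test files
    for label, fn in (("sequential", sequential), ("parallel", parallel)):
        size, secs = fn(files)
        print(f"{label}: {size / 1e6:.0f} MB in {secs:.2f}s")
```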

Apple has also listened to feedback on external display support: the MacBook Air M3 can now drive two external displays rather than just one, as was previously the case. However, as we noted in our review, while two monitors are connected, the MacBook Air’s own display cannot be used (the laptop must be closed).

Much of the criticism now aimed at this latest take on the MacBook Air concerns the base model shipping with only 8GB of system RAM – with calls for Apple to make it at least 16GB. It’s true that 8GB/256GB is looking stingy as an entry-level spec these days, but you can upgrade – at a cost (and some will argue Apple is pushing you to pay that extra premium).

We should emphasize that the MacBook Air M3 is an excellent notebook, as our reviews of the 15-inch and 13-inch models demonstrate, but there is still room for improvement.


Can you actually meditate in VR? I tried Headspace XR at Meta’s London headquarters

I was trying to breathe deeply, relax my shoulders, and follow the visual cues inside a pastel-colored landscape with an orange sunset. It was almost easy to forget that I was being closely monitored by a group of Meta and Headspace representatives, as if I were part of a laboratory experiment.

Trying to appear natural and calm while being watched and evaluated by unidentified observers? My mind refused to be silent for a time, conjuring up images of myself in a police interrogation room, and I tried not to snort.

Headspace, the meditation app and one of the top health and fitness platforms, has announced Headspace XR, an “immersive playground” VR game with single-player and multiplayer modes based on the meditation principles found in the app. And we got to visit Meta’s headquarters to try it out ahead of launch.

Headspace XR is a game developed by Meta and Nexus Studios for the Meta Quest 2, Quest 3, and Meta Quest Pro headsets (some of the top VR headsets), and it is advertised as an experience in which “users can move, play, meditate, or just explore with their friends”. The game is set in a central hub world with 13 locations to explore, each representing a distinct exercise aimed at promoting creativity, mindfulness, and positive thinking.

The game costs $29.99 in the United States and £22.99 in the United Kingdom, and it is now available via the Meta Quest store. We had an exclusive preview at Meta’s London HQ ahead of its release today.

“We started with purpose,” Deborah Casswell, Nexus Studios’ senior creative director, told TechRadar. “We knew the Gen Z demographic we were creating this for was battling with their mental health. Headspace has always done an excellent job of reaching people where they are, so this provided an opportunity to reach a large number of individuals who are already using the Quest platform.

“In terms of creating it for VR, we took all of the positive stuff, such as social incentive, and approached it from the perspective of spending time with friends, as friendships are central to their identity. We’ve created a good environment where they can get together to feel better, stimulate talk about mental health, and participate in activities that can assist.”

Trying the game

I am a casual on-and-off mindfulness app user. I’ve used Headspace’s mobile app before, and I’ve previously written about my Calm membership. I’ve had some experience with mindfulness principles and how platforms like Headspace gamify and execute them via mobile apps. I’ve used VR in the past, but I’m not a heavy user these days.

I put on the headset, started the game, and found myself in a neutral, shape-based environment filled with relaxing pastel tones, predominantly blues, pinks, and yellows. At the center stood a tower comprised of numerous polyhedrons, stretching up into the sky. With practice, you could literally raise your head above the clouds. A good, albeit obvious, metaphor.

The sky was gorgeously depicted, and I appreciated how participatory the setting was: you could spend hours hurling colored bubbles (or “flow bursts”) at the walls and coloring them with a gratifying splat, or pressing your hand into the wall to gradually increase the spread of color.

With a little coaching, I was able to leave my mark on this lobby area and test a few games. One task required you to wave your hands or use a controller (I used controllers) to move energy balls around a room in tai chi-like motions, collecting floating points. The effect was similar to waterbending in Avatar: The Last Airbender, with methodical, relaxing movements. The game was quite responsive, and it scratched that itch in my brain. I appreciated it and felt as relaxed as I could be while wearing an unusual X-Men-style visor and being stared at by Meta and Headspace workers.

I also tried box breathing, a guided breathing exercise I had previously completed using the app. I breathed in for four counts, held for four, exhaled for four, and then held at the “bottom” of my breath for another four. With the VR headset, I could follow visual cues instead of verbal ones, with finished boxes fading into a sunset sky. This is all extremely clever stuff.
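If you’re curious, the four-four-four-four rhythm is simple enough to script yourself; here’s a tiny illustrative Python timer for box breathing (the pacing and prompts here are our own, not anything from the Headspace app).

```python
import time


def box_breathing(count: int = 4, rounds: int = 1) -> None:
    """Walk through the four phases of box breathing, `count` seconds each."""
    phases = ["breathe in", "hold", "breathe out", "hold"]
    for _ in range(rounds):
        for phase in phases:
            print(phase)
            for second in range(count, 0, -1):
                print(f"  {second}")
                time.sleep(1)


box_breathing(count=4, rounds=1)
```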

“Play is a fantastic wellbeing tool,” Casswell added. “You’re rarely more focused than when you’re playing. So, using the Meta Quest platform, we can use all of these gaming features to enhance our wellness experiences. What we concentrated on was helping you focus on one task at a time.” When I last used a virtual reality headset, I was constantly scanning the edges of my view for enemy snipers: this new use of immersion is a welcome change of pace.

VR mindfulness: Is it ever really viable?

When I first tried Headspace XR, I thought it was a cool way to incorporate meditation ideas into a video game. It was both enjoyable and enlightening, and I believe it was an effective approach to illustrate simple tactics and ideas, such as the power of play and certain breathing exercises, that can be used to boost your mood when you need it. That is frequently how meditation apps work, and I believe Headspace XR will be even more effective at guiding the user into a contemplative state thanks to VR’s natural immersiveness.

However, Headspace and Nexus Studios appear set on making the VR experience a virtual space to relax in, transforming mindfulness into a social activity for Generation Z (not my words: Casswell emphasized designing the game with Gen Z in mind). Although there was a significant increase in these kinds of warm shared experiences during the pandemic – I’m thinking of Animal Crossing on the Switch – there are a couple of impediments here. For starters, each friend must own a Meta Quest headset as well as a copy of the game, which costs $30 / £23 on its own.

As a shared gaming experience, Headspace XR falls short: there’s satisfaction in playing without the constraints of response speed, strategy, or organization, but most groups of friends will need a meatier, more rewarding shared experience. The pastel-hued environment is lovely, yet it falls short of their goal of creating a safe, non-judgmental space. I realize Headspace intends to decrease the damage caused by over-stimulation, but as a game, I’m concerned it’s gone too far the other way.

Headspace XR is designed to be a challenge-free solo gaming experience, which is enjoyable for a time – but will it keep Meta Quest gamers coming back? I’m not certain the mini-games are replayable. It’s easy enough to undertake short 10-minute (or even 20-minute) meditation sessions on an app, many of which can get repetitious, but having to log in and put on the headset adds another barrier to using the game to build a meditation habit.

I enjoyed the experience, and as a casual mindfulness practitioner I keep returning to the apps, but I don’t see myself donning a headset and passively playing with my friends. Perhaps I’m just stuck in my millennial ways; younger readers may feel differently.


Nvidia’s GeForce Now Day Pass lets you hire an RTX 4080 GPU-powered gaming PC for 24 hours

Nvidia has introduced a new way for gamers to access its GeForce Now streaming service in the form of a Day Pass.

Team Green promised back in January that this was coming to GeForce Now, and the passes are now available.

Previously, you had to sign up for a subscription to fully appreciate the capabilities of the cloud gaming service, as opposed to the limited free tier, which has significant queue times (and now includes advertisements).

However, the Day Pass gives you access to the complete GeForce Now service – even the top-tier RTX 4080-powered offering – for, as the name suggests, a day, in exchange for a small outlay.

How much? In the US, Nvidia charges $7.99 for an Ultimate Day Pass and $3.99 for a Priority Day Pass.

Those passes correspond to the existing Priority and Ultimate subscriptions (which are monthly, or you can sign up for six months at a discount).

For those unfamiliar with the plans, Priority is GeForce Now’s standard option, providing 1080p cloud gaming at up to 60 frames per second (fps) with a maximum 6-hour session time. Ultimate ups the ante with an RTX 4080 cloud-based system capable of 4K gaming at up to 120 fps (or 240 fps if not running 4K) and a slightly longer 8-hour session duration.

Keep in mind that the quality of your streaming will, of course, be determined by the quality and speed of your internet connection.

However, it is worth noting that Nvidia states: “Day Passes are available in limited quantities each day, so grab one before the opportunity passes.”

Analysis: Hard day of gaming

This is a nice development for those considering dipping their toes into the world of cloud gaming, as it allows you to test the waters for just one day.

Nvidia defines a day as 24 hours, so those willing to go without much sleep will get their money’s worth. (We wouldn’t recommend a 24-hour gaming marathon for a variety of reasons, unless perhaps you’re doing it for a good cause, such as charity).

Being able to test the entire service in this manner for a few dollars (for the standard offering) is a fantastic idea, because no matter how much you research or read about how good GeForce Now may (or may not) be, there’s no substitute for actually running it on your own internet connection to see how it performs.

The Ultimate (RTX 4080) Day Pass may be too expensive for certain gamers.

Given that Day Passes are only available in ‘limited’ quantities each day, Nvidia may see this as a helpful way to put spare gaming server capacity to work.

Existing GeForce Now customers may see longer queue times if the Day Pass becomes popular. (And if wait times are extended, we can only imagine what that means for free-tier users, who already require the patience of a gaming saint).

Remember that with GeForce Now, you are renting the hardware to play on, not the games themselves; you must own them (for example, on Steam or the Epic Games Store), and they must also be supported by Nvidia’s cloud service.


Nvidia may have made its final GTX graphics card, but what affordable options are left from Team Green?

According to rumors, Nvidia has discontinued its cheap GTX 1650 and 1630 graphics cards, the final surviving GTX 16 Series products.

If this sounds familiar, it’s because in December 2023, we learned that Nvidia was planning to discontinue production of these GTX GPUs in Q1 2024.

According to the same source – the oft-cited Board Channels forums (in China) – this move went ahead as scheduled.

VideoCardz discovered a new forum post claiming that Nvidia’s product roadmap indicates these GPUs will be discontinued in Q1, and that no further GTX 16 chips will be made available to graphics card manufacturers in the future.

This means the GTX brand is dying, as no more will be produced; once the remaining GPUs in stores run out, the 16 Series and the GTX name will be gone. From now on, all of Nvidia’s graphics cards will be RTX models.

How long will it take for GTX 16 stock to run out? According to the source, the remaining supplies will be consumed in as little as a month, though possibly as many as three. So, in principle, if you haven’t got your GTX 16 Series GPU by June 2024, you’re out of luck. (Or, if we’re lucky, some stock may linger for longer – more on that in the next section).


Analysis: Nvidia’s cheap GPUs get even more ‘meh’

Don’t be too concerned; even if this is true (and it may well be), these GTX graphics cards are getting a little old (the GTX 1650, for instance, is five years old). That’s partly why the report is credible, and it’s backed up by a quick look at stock levels in the United States: Newegg, for example, only has two GTX 1650 models in stock, while the GTX 1630 has all but vanished.

Also, don’t panic if you already own one of these GPUs: the discontinuation of GTX 16 Series graphics card sales does not imply that Nvidia will discontinue support for the cards. Nvidia will continue to support these GPUs in future driver versions for some time.

With the discontinuation of these GTX models, Nvidia’s budget option is now the RTX 3050, which includes a new variant with 6GB of VRAM (rather than 8GB) and a lower price tag – probably introduced with the discontinuation of the GTX 16 models in mind.

The RTX 3050 6GB is now only slightly more expensive than the GTX 1650 in the United States, but it isn’t great value for money and its gaming performance is lacking. The 8GB version of the RTX 3050 is significantly faster for gaming, as is the RTX 2060. (Incidentally, the latter is our rig’s ailing GPU – specifically, the RTX 2060 Super – which we really need to upgrade soon, but we’re inclined to wait for RDNA 4, which should bring some serious mid-range awesomeness to the table).

If you’re in the market for a budget GPU, which means something incredibly affordable, there are superior options from AMD (such as the value-packed RX 6600), or even Intel’s Arc range, frankly.


Apple’s MacBook Air M3 may send a message to Intel: we do AI PCs, too, and you haven’t seen nothing yet

Lost in today’s frenzy over a pair of brand-new M3 MacBook Air computers was a not-so-subtle tweak in product language that could signal Apple’s formal entry into the race to build an AI PC.

The press announcement for the new 13-inch and 15-inch MacBook Air ultraportables featuring the latest Apple silicon included two paragraphs saying that the MacBook Air is the “World’s Best Consumer Laptop for AI”. If you didn’t follow the computer industry as closely as I do, you could have dismissed that as some weirdly particular boasting or hyperbole on Apple’s part. I see things a little differently, however.

First, some history. Until 2020, almost all new Apple Macs, including MacBook Airs, had Intel processors. That year, Apple declared its intention to manufacture its own chips and eventually replace all Intel CPUs with its custom system on a chip (SoC), which became known as Apple silicon. The first such chip, the M1, debuted in the popular MacBook Air M1 (since discontinued).

The march of Silicon

Apple eventually delivered on its pledge, replacing all Intel technology with its own after numerous revisions and upgrades. Intel still controls the majority of the Windows PC market, but in certain ways, Apple is seen as the system pioneer, developing SoCs that are faster and more efficient than anything Intel produces.

Intel’s big plan for countering that perception – and for exciting people looking for alternatives that can run Windows as fast and efficiently as something like Apple silicon – is to revise its entire chip lineup with Intel Core Ultra processors and, more importantly, the “AI PC.”

The AI component is provided by the Neural Processing Unit (NPU), which functions as an AI coprocessor inside Intel Core Ultra chips. Intel has the endorsement of almost all major Windows PC makers, including, probably most importantly, Microsoft. The Redmond software behemoth is currently launching a full-court press behind Copilot, the generative AI (formerly Bing AI chat) that it built with intelligence from OpenAI; it appears to be everywhere, and on AI PCs it gets its own Copilot keyboard button.

What any of us will do with an “AI PC” is uncertain, but we will be discussing these systems throughout the summer and into the Northern Hemisphere’s back-to-school shopping season.

Apple, by some estimates, controls only 17% of the PC market. Even if people believe Apple silicon is superior and macOS is a better platform than Windows, Apple cannot afford to sit back and watch Intel and Microsoft innovate and sell their way to even greater PC market heights.

We understand AI

This brings us back to the “World’s Best Consumer Laptop for AI”. Apple has a point here. It has been doing AI for a long time, beginning with the addition of its first Neural Engine to the iPhone 8 via the A11 Bionic chip. That early onboard machine learning hardware is a direct ancestor of the M3’s 16-core Neural Engine.

Apple has made no secret of its silicon’s AI capabilities, but it has never put them front and center. That is all changing now.

The corporation has no choice. Apple’s challenge stems from the fact that, unlike Microsoft, OpenAI (ChatGPT), and Google (Gemini), it does not have a generative AI product. Siri is not generative; it cannot generate poems, presentations, or artwork. That has hampered Apple’s attempts to look ahead of the curve.

In the release, Apple specifically mentions Large Language Models (LLMs): “Combined with the unified memory architecture of Apple silicon, MacBook Air can also run optimized AI models, including large language models (LLMs) and diffusion models for image generation locally with great performance.”
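Apple doesn’t name any specific models or tools, but for a rough idea of what “running an LLM locally” looks like in practice, here’s a hedged sketch using the open-source llama-cpp-python package (which supports Apple silicon via Metal) and a quantized GGUF model file; the package choice and model path are our assumptions, not anything Apple ships.

```python
# Illustrative only: Apple doesn't specify tooling. llama-cpp-python wraps
# llama.cpp, an open-source runtime with Metal support on Apple silicon.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local model file
    n_gpu_layers=-1,  # offload every layer to the GPU (Metal on Apple silicon)
)

# The prompt is processed entirely on-device; nothing is sent to the cloud.
output = llm("Q: Why run a language model locally? A:", max_tokens=64)
print(output["choices"][0]["text"])
```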

The Shape of Things to Come

Running locally, without relying on potentially less secure or slower cloud assistance, has always been Apple’s secret AI sauce. However, Apple is well aware that it cannot win this game unless it allows for cloud-based generative AI.

During demonstrations, I watched the MacBook Air M3 handle both cloud-based Microsoft Copilot prompts and local generative tasks using programs such as Luminar Neo, which can take a fuzzy midnight shot and add generative detail to make the image usable. In all cases, performance appeared almost instantaneous and was easily comparable to that of cloud-based generative AI.

The objective of demonstrating these apps and making these announcements, however, is not simply to inform the world that Apple also does Gen AI. I believe it is preparing us for what is to come.

It is not only about new products and press releases. Apple CEO Tim Cook now takes practically every opportunity to make big promises about generative AI (remember when he hyped “AR”? What a difference a letter makes).

Cook understands that Apple technology is more than ready for Large Language Models and Generative AI for images and text, and we’ll see Apple take advantage of all that power beginning with WWDC 2024 in June.

That’s Apple’s message about AI: You haven’t seen nothing yet.


AI is taking over your smart TV. Here’s why

AI is revolutionizing every aspect of consumer technology, from your laptop’s web browser to smartphones and smart speakers. Such advances have received a lot of attention, but AI is also silently working under the hood of the greatest TVs, where it can considerably improve the overall picture quality.

At the recent CES 2024, TV manufacturers from Samsung and LG to TCL and Hisense emphasized the AI capabilities of the processors powering their televisions. Those capabilities cover everything from tasks as simple as noise reduction to more complicated ones, like recognizing and separating objects of interest in an image and then dynamically increasing contrast and color for enhanced visual impact.

AI is also used in TVs to enhance HDR tone mapping in movies and shows with high dynamic range, as well as to “re-master” older content that was not originally produced in HDR format. It is often used to improve image detail and sharpness, mostly by accessing a database of pre-existing images and using it as a foundation for subsequent processing.

New AI horizons

In 2024, AI will propel television technology to new heights, probably for the better. LG has revealed a feature for its best OLED TVs called AI Director Processing, which can assess a director’s intended color scheme for each shot in a film and “enhance the color expression” accordingly. That is a daring AI move by LG, and we’ll be watching this new feature closely to see how well it works when we get our hands on the company’s 2024 models.

Samsung, a corporation that never shies away from adopting AI, has revealed a new feature dubbed AI Motion Enhancer Pro for its 2024 8K televisions. This employs artificial intelligence to determine the sort of ball used in various sports and effectively replace it onscreen on a frame-by-frame basis, not just reducing but completely eliminating motion blur. I saw a demo of this technology in action, and the visual contrast between a ball in motion onscreen with AI Motion Enhancer Pro enabled and a ball with the function turned off was night and day.

AI and 8K TVs: Rounding out the picture

Samsung is using AI-based processing on its 8K TVs to improve the overall picture, not just the balls. The company’s 2024 8K TVs have a new CPU, the NQ8 AI Gen 3. According to Samsung, this includes an on-device AI engine (Neural Processing Unit) that is twice as fast and has eight times as many neural networks as the one used in last year’s 8K televisions.

Having so much computing capability onboard mostly aids picture upscaling. A 4K TV has 8.3 million pixels on its screen, but an 8K TV has 33 million pixels. Given that the majority of content watched on 8K TVs is in 4K or ordinary HD quality, millions of new pixels are created to fill the ultra high-definition display. The 8K AI Upscaling Pro feature in Samsung’s 8K TVs, like the image processing on 4K TVs, creates pixels by referencing a visual database, and it also uses “Quantum Super Resolution” to process images on a frame-by-frame basis to ensure lines look smooth and fine details do not appear overly enhanced.
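The pixel arithmetic behind that upscaling burden is easy to verify; this short Python snippet works out what share of an 8K panel’s pixels must be generated when the source is 4K or full HD.

```python
# Pixel counts for common resolutions (width x height)
RESOLUTIONS = {
    "Full HD": (1920, 1080),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

w8k, h8k = RESOLUTIONS["8K"]
target = w8k * h8k  # 33,177,600 pixels (~33 million)

for name in ("Full HD", "4K"):
    w, h = RESOLUTIONS[name]
    source = w * h  # 4K works out to 8,294,400 (~8.3 million)
    share = (target - source) / target
    print(f"{name}: {source:,} source pixels -> "
          f"{share:.0%} of the 8K picture must be generated")
```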

AI and 4K TVs: now more important than ever

A significant advantage of the finest streaming services is that many now offer movies and shows in 4K resolution with Dolby Vision HDR and Dolby Atmos audio. As long as your home has a high-speed internet connection and reliable Wi-Fi, you can experience stunning 4K visual quality and immersive sound.

That benefit is becoming elusive, however. Major streaming services such as Netflix and Max have recently introduced pricing tiers that require you to pay more for 4K/HDR video and Dolby Atmos audio. For some 4K TV owners, the change has made those services too expensive, prompting them to downgrade to a cheaper Standard plan with HD-resolution programming or even a basic ad-supported tier.

Unless you intend to stop streaming totally and instead purchase one of the finest 4K Blu-ray players, this new development has made 4K TVs’ upscaling capabilities more vital than ever.

TV brands that use AI for upscaling have an obvious edge here, because the same techniques used to make 8K images look excellent also work for HD-to-4K conversion. However, not all TVs offer technically competent 4K upscaling, even if their processors apply AI for other sorts of image enhancement.

The general processors included in many smart TVs are now better than ever, with AI-powered features like real-time scene and object identification, detail optimization, and other visual enhancements. That’s one of the main reasons why today’s low-cost big-screen TVs look far superior to versions from a few years ago. However, generic solutions cannot handle everything, and this truth will become more evident as we progressively transition to 8K resolution and considerably larger screen sizes (hello, 98-inch TVs).

When it comes to televisions, there is nothing to fear from artificial intelligence. In terms of image quality, more AI is better. However, not every artificial intelligence employed in televisions is equal. As TVs evolve from flat panels on stands to room-filling video walls, bespoke AI-based picture processing and upscaling will give obvious and indisputable visual enhancement.


The Nvidia GeForce RTX 5090 could be up to 70% quicker than the 4090, but its greatest chips may be kept for AI

The Nvidia GeForce RTX 5090 graphics card has been the subject of numerous rumors since at least last year, and the latest offers two startling new pieces of information on what we might expect from the presumed next-generation graphics card.

The RTX 5090 will most likely be built on Nvidia’s Blackwell architecture and, according to the tech YouTube channel Moore’s Law is Dead, might be up to 70% faster than the current-generation RTX 4090. That is a significant performance boost that would make any of the greatest PC games on the market look like a piece of cake.

Previous rumors suggested that the RTX 5090 could perform almost twice as fast as the RTX 4090, so this is another hint that a major generational leap is coming. However, as with all such reports, some skepticism is appropriate until we can measure the performance ourselves.

This performance gain is expected to come from as many as 192 streaming multiprocessors in the RTX 5090 (a 50% increase over the RTX 4090’s 128), giving the card 24,576 CUDA cores, 192 ray tracing cores, and 768 tensor cores. In other words, if any of these reports are correct, this card will be a true giant.
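Those core counts line up exactly with the streaming multiprocessor (SM) count if Nvidia keeps the per-SM layout of its recent architectures (128 CUDA cores, 1 ray tracing core, and 4 tensor cores per SM); the quick sanity check below assumes that layout carries over to Blackwell, which is our assumption rather than anything confirmed.

```python
# Per-SM resources on recent Nvidia architectures
# (assumed, not confirmed, to carry over to Blackwell)
CUDA_PER_SM = 128
RT_PER_SM = 1
TENSOR_PER_SM = 4

for name, sms in (("RTX 4090 (actual)", 128), ("RTX 5090 (rumored)", 192)):
    print(f"{name}: {sms} SMs -> "
          f"{sms * CUDA_PER_SM:,} CUDA cores, "
          f"{sms * RT_PER_SM} RT cores, "
          f"{sms * TENSOR_PER_SM} tensor cores")

# 192 SMs / 128 SMs = 1.5, i.e. the rumored 50% increase
```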

However, the increase in speed between the cards would come at a high cost, literally. According to the same report, the 5090 might cost between $2,000 and $2,500, with other predicted prices including a $1,000 RTX 5080, a $700 RTX 5070, a $400 RTX 5060, and a budget RTX 5050 Ti at around $300.

Regardless of performance and pricing, these may not be the greatest cards Nvidia could offer. Another prediction is that the RTX 5090 will not feature a fully enabled die, since Nvidia will almost certainly save its most powerful silicon for the thriving AI sector, which propelled the company into the trillion-dollar valuation stratosphere last year.

Nvidia is largely uncontested

It will be interesting to see how Nvidia’s emphasis on AI this generation influences graphics card development in the next. We may get some obscenely powerful cards whose dies are still not fully enabled, for the benefit of the AI market over gamers, which is simply mind-boggling.

And, despite this, Team Green can charge whatever it wants for this card if AMD fails to step up and provide a graphics card of comparable quality. With predictions that AMD won’t even make a bid for the premium GPU market with its next-gen RDNA 4 architecture, it doesn’t appear that anything will limit the escalating pricing of Nvidia’s graphics cards.


I saw the world’s most advanced robot, and it’s uncanny

Cast your mind back to 2023, and you might recall seeing Ameca, the so-called world’s most advanced robot, appear on UK TV’s This Morning and create headlines everywhere. Ameca is back, with a second-generation version unveiled at MWC 2024, featuring even more lifelike facial expressions.

I initially became aware of Ameca’s presence at the show when I noticed a crowd of gawking MWC attendees fixated on something. Naturally, I walked over to investigate, and there I saw Ameca in all its semi-skeletal splendor, answering questions thrown at it by MWC employees – and leaving me with the eerie impression that I’d wandered onto the pre-production set of Ex Machina.

The robot uses generative AI to respond to questions in real time, ranging from basic ones like ‘how old are you?’ to sillier ones like ‘can you dance?’ – and reader, Ameca can dance, probably better than the typical nightclub goer.

It was all good fun, but when Ameca was asked whether it had feelings, the demo became genuinely astounding. It responded with a variety of facial expressions, all of which appeared incredibly realistic, and for a moment I could imagine robots like this becoming part of our future.

Ameca fumbled a few questions, partly because it was attempting to keep up with a bombardment of prompts and requests. But its grasp of natural language was rather good, and the phrasing of its replies felt genuine enough that you didn’t feel like you were talking to a jumble of cables, chips, and servo motors. That said, there is still some distance to go before the ‘uncanny valley’ sensation is gone.

Nonetheless, witnessing Ameca operate in the (robot) flesh is astounding and not as disturbing as one might imagine – sure, one can’t help but think of the mediocre I, Robot film, but that feeling is quickly forgotten as one watches Ameca work.

Ameca’s designer, UK-based Engineered Arts, does not aim for robots to replace humans, which is a comfort given the current concern that generative AI may displace jobs. Rather, the company aims to use Ameca to advance robotics research, though it believes the robot will eventually work alongside humans in the real world as a robotic receptionist or social care assistant.

Such a scenario is probably a long way off, but Ameca will act as a platform for AI technology, potentially leading to smarter robots that are genuinely valuable to our society. Either that, or we will all have robot butlers before long.


Wi-Fi 7: Everything You Should Know About the New Wireless Standard

Wi-Fi 7 represents the next generation of wireless communications. It’s already here and provides lightning-fast connectivity in residential, business, and commercial settings. Not only that, but it has several new features that should appeal to anyone trying to improve their connection.

Wi-Fi 7’s technology seems truly cutting-edge, and there are numerous compelling reasons to upgrade. However, there are some drawbacks to consider, most notably the cost of entry; you’ll have to pay for the next-generation connection as well as one of the top Wi-Fi routers capable of handling it.

The Wi-Fi Alliance has spoken extensively about the benefits of Wi-Fi 7, but below is a synopsis of what you may expect if you make the jump.

Wi-Fi 7: Cut to the Chase

  • What is it? Wi-Fi 7 is a wireless technology that utilizes the 6GHz frequency to increase internet speeds even more.
  • When does it come out? It’s presently available in the United States, the United Kingdom, Australia, Japan, and Mexico. However, it requires regulatory approval in many other nations.
  • What will it cost? The cost will vary depending on the router you purchase. Your internet service provider will most likely offer the cheapest options, although the best Wi-Fi 7 mesh routers may cost more than $1,000 (about AU$1,500).

WI-FI 7 RELEASE DATE

Wi-Fi 7 was formally released on January 8, 2024, when the Wi-Fi Alliance launched its Wi-Fi Certified 7 program, but it will take years for many people to adopt it. According to the Wi-Fi Alliance, 233 million Wi-Fi 7 devices are predicted to enter the market in 2024, increasing to 2.1 billion by 2028.

According to the Alliance, smartphones, PCs, tablets, and access points (APs) will be among the first to adopt Wi-Fi 7, followed by customer premises equipment (CPE) and augmented and virtual reality (AR/VR) equipment.

Wi-Fi 7 Routers and Pricing

The initial wave of Wi-Fi 7 routers is priced very differently depending on the manufacturer and model.

Some reasonably priced routers, such as the TP-Link Archer BE800, cost as little as $300, while others cost $1,000 or more. There are also leased routers that internet providers frequently supply, though these may add monthly fees to your internet subscription.

In between, there’s the Netgear Nighthawk RS700S, which costs $699 / AU$1,499; learn more about it in our Netgear Nighthawk RS700S review. Spoiler alert: we love it. There’s also the Amazon eero Max 7 router, which costs $599 (not available in Australia).

Meanwhile, Acer unveiled two Predator Connect gaming routers at CES 2024. The Predator Connect T7 supports Wi-Fi 7 tri-band, which expands home coverage.

WI-FI 7: SPECS AND PERFORMANCE

The Wi-Fi Alliance has revealed the specifications and performance of Wi-Fi 7. The first headline feature is the introduction of 320MHz channels, which are twice as wide as 160MHz channels. However, while their increased capacity results in extraordinarily high speeds, they can only be used in countries that allow access to the 6GHz frequency. So far, those countries include the United States, the United Kingdom, Australia, Japan, and Mexico.

There’s also MLO (Multi-Link Operation). This enables networked devices to transmit and receive data over multiple bands (typically 2.4GHz, 5GHz, and 6GHz) simultaneously, leading to more consistent service, lower latency, and higher throughput. This is especially significant for VR headsets: receiving a video feed on the 6GHz band while sending tracking information on the 5GHz band, for example, would deliver the best performance.

Another possible benefit for VR is predictable latency, which tells devices when to expect connection interruptions and may result in better tracking. There is also 4K QAM (Quadrature Amplitude Modulation), which transmits 20% more data than the current 1024 QAM standard.
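That 20% figure falls straight out of the modulation math: each QAM symbol carries log2 of the constellation size in bits, so moving from 1,024 to 4,096 states lifts the per-symbol payload from 10 bits to 12. A quick check in Python:

```python
import math


def bits_per_symbol(states: int) -> int:
    """Each QAM symbol encodes log2(states) bits."""
    return int(math.log2(states))


old = bits_per_symbol(1024)  # 10 bits per symbol (1024-QAM, Wi-Fi 6/6E)
new = bits_per_symbol(4096)  # 12 bits per symbol (4K QAM, Wi-Fi 7)

print(f"1024-QAM: {old} bits/symbol, 4K QAM: {new} bits/symbol")
print(f"Raw gain: {(new - old) / old:.0%}")  # -> 20%
```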

Finally, Wi-Fi 7 is backwards compatible with existing Wi-Fi standards, allowing older devices to connect to the new routers. However, they may not be able to use the greater speeds that Wi-Fi 7 provides, so double-check your devices’ Wi-Fi capabilities before purchasing a new router.


Microsoft report claims US adversaries are preparing for an AI war

In a new briefing released this week, software giant Microsoft argues that US adversaries such as Iran, Russia, and North Korea are poised to ramp up their cyberwar operations using modern generative AI. The problem is exacerbated, it argues, by a persistent shortage of experienced cybersecurity professionals: the briefing cites a 2023 ISC2 Cybersecurity Workforce Study estimating that almost 4 million additional security workers will be required to deal with the impending onslaught. Microsoft’s own telemetry in 2023 revealed a significant increase in password attacks over two years, from 579 per second to more than 4,000 per second.

The company’s answer has been the launch of Copilot for Security, an AI solution intended to detect, identify, and stop these threats faster and more efficiently than humans can alone. For example, a recent test found that generative AI enabled security analysts of all skill levels to work 44% more accurately and 26% faster across all types of threats. Eighty-six percent said AI increased their productivity and lowered the effort required to perform their tasks.

Unfortunately, as the company admits, the application of AI is not limited to the good guys. The rapid rise of the technology is fueling an arms race, as threat actors seek to use the new tools to cause as much damage as possible. Hence this threat briefing, issued to warn of an impending escalation. It reveals that OpenAI and Microsoft are working together to detect and counter these rogue actors and their techniques as they emerge in force.

Generative AI has had a pervasive impact on cyberattacks. Darktrace researchers discovered a 135% spike in email-based, so-called ‘new cyber attacks’ between January and February 2023, coinciding with the broad rollout of ChatGPT. Researchers also saw a rise in linguistically complex phishing attacks, with more words, longer sentences, and more punctuation. This contributed to a 52% surge in email account takeover attempts, with attackers posing as the IT team at victims’ firms.

The paper identifies three primary areas expected to demand growing amounts of AI in the near future: improved reconnaissance of targets and weaknesses, enhanced malware scripting using advanced AI coding tools, and assistance with learning and planning. Because of the massive computing resources required, nation states will very probably be among the first to adopt the technology.

Several such cyberthreat outfits are expressly identified. Strontium (aka APT28) is a very active cyber-espionage gang that has been working out of Russia for the past two decades. It goes by a variety of names and is expected to significantly enhance its use of powerful AI capabilities as they become available.

North Korea also has a significant cyber-espionage presence. According to some reports, over 7,000 workers have been running ongoing threat operations against the West for decades, with activity increasing by 300% since 2017. The Velvet Chollima group, which Microsoft tracks as Emerald Sleet, primarily targets academic and non-governmental organizations, and is increasingly using AI to optimize phishing tactics and probe for vulnerabilities.

The briefing focuses on two more important participants in the global cyberwar arena: Iran and China. These two countries have also been boosting their usage of large language models (LLMs), largely to identify research opportunities and potential avenues of attack. In addition to these geopolitical threats, the Microsoft briefing discusses the rising use of AI in more traditional criminal activities such as ransomware, fraud (particularly voice cloning), email phishing, and general identity manipulation.

As the conflict heats up, we can expect Microsoft and its partners, such as OpenAI, to build an increasingly sophisticated set of tools for threat detection, behavioral analytics, and other techniques of rapidly and decisively identifying attacks.

According to the research, “Microsoft anticipates that AI will evolve social engineering tactics, creating more sophisticated attacks including deepfakes and voice cloning…prevention is key to combating all cyberthreats, whether traditional or AI-enabled.”
