
As Microsoft continues its ‘death by a thousand cuts’ approach to the Control Panel, Windows 11 adds more of its functions to the Settings app

Microsoft is gradually migrating the functionality of the old Control Panel, which is still there in Windows 11, to the Settings app, and several new capabilities have recently made the transition – at least in beta editions of the OS.

Windows Latest spotted this latest round of migrations, which should benefit Windows 11 users when the 24H2 update is released later this year.

One change is that the Power & Battery page in the Settings app now lets laptop users modify ‘Lid, power, and sleep button controls’ (options that currently live in the Control Panel, as mentioned). This lets you decide what happens when you close the notebook’s lid or press the power button (the device can sleep, hibernate, shut down – or do nothing).
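
Incidentally, the same lid and power-button behavior has long been adjustable from the command line via Windows’ built-in powercfg tool. Here’s a rough sketch of the equivalent tweak (an illustration only, separate from the new Settings page – the aliases used are documented, but verify them with powercfg /aliases on your machine):

```python
# Rough sketch: set the laptop lid-close action with Windows' built-in
# powercfg tool (run from an elevated prompt). Values: 0 = do nothing,
# 1 = sleep, 2 = hibernate, 3 = shut down. This sets the on-AC behavior;
# use /setdcvalueindex for on-battery.
import subprocess

def set_lid_close_action(action: int) -> None:
    subprocess.run(
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_BUTTONS", "LIDACTION", str(action)],
        check=True,
    )
    # Re-apply the current scheme so the change takes effect immediately.
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

set_lid_close_action(1)  # closing the lid now puts the laptop to sleep
```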

Desktop PC users get power options too, but they are understandably different – there is no lid to close in this case, and the hibernate option is not available.

Microsoft is also working on the Display section of Settings, which now includes Color Management features that let you adjust your color profiles.

Another minor change was discovered by Windows Latest in the Storage Pool panel, where there is a new option to ‘Delete this Storage pool’ that was previously only available through the old Control Panel.


Analysis: The Control Panel’s Slow Slide into Oblivion

All of these are pretty modest improvements – the power-related adjustments are arguably the most useful – but they all contribute to the Settings app finally taking over all of the functions of the old Control Panel. It’s simply that Microsoft is taking its time rolling out these kinds of updates to Windows 11 (and, for that matter, Windows 10).


Galaxy AI is currently available on several phones, including the Samsung Galaxy S22 and Z Flip 4

We’ve known for a while that Samsung’s AI features would probably come to the Samsung Galaxy S22 and some other older phones, and now that’s happening: the company’s One UI 6.1 update is reportedly rolling out to the Samsung Galaxy S22 series, the Galaxy S21 series, the Samsung Galaxy Z Flip 4, the Samsung Galaxy Z Fold 4, the Galaxy Z Fold 3, and the Galaxy Z Flip 3.

So far, these updates, spotted by SamMobile, are only available in South Korea, but additional regions are likely to receive them soon. To check for and download the update, go to Settings > Software update > Download and install; though you will most likely receive a notification when it becomes available.

What this update brings varies depending on the phone you’re using. According to SamMobile, the Samsung Galaxy S22 series, Galaxy Z Fold 4, and Samsung Galaxy Z Flip 4 will get the same AI features as the Samsung Galaxy S24.

These include AI-generated wallpapers, Browsing Assist, Chat Assist, Circle to Search, Edit Suggestion, Generative Edit, Interpreter, Live Translate, Note Assist, and Transcript Assist.

Only Circle to Search

The Samsung Galaxy S21 series, Galaxy Z Fold 3, and Galaxy Z Flip 3, meanwhile, are only getting Circle to Search.

This limitation is most likely because these older Galaxy handsets lack the processing power to run the more demanding AI features smoothly, but it’s also possible that Samsung simply wants to reserve its newest features for newer phones.

You can learn more about these new AI tools in our comprehensive guide to Galaxy AI, while our Galaxy AI compatibility guide (which will be updated with this new information) provides a device-by-device breakdown of which features are available on which devices.

But, suffice to say, this is a big upgrade, at least for the devices getting Samsung’s full array of AI features. We assume these are the last existing phones that will get Galaxy AI support, but given that we’d expected these features to be reserved for Samsung’s newest handsets, we’re pleasantly surprised by the scale of the rollout.


The latest macOS Sonoma update reportedly breaks several USB hubs

According to reports on the web, updating to macOS Sonoma 14.4 is causing some USB hubs to stop working – though it’s unclear how widespread the problem is or exactly which devices are affected.

The news was first shared by AppleInsider readers, but there has also been discussion on Reddit, Apple’s support forums, and MacRumors. So far, it appears that only USB hubs built into monitors are affected by the flaw, including models from Dell, Samsung, and Gigabyte.

Judging by the number of responses to the article and threads referenced above, many people are experiencing this problem. However, the issue does not appear to affect everyone using macOS Sonoma 14.4 with a monitor’s USB hub.

Apple has not commented on the issue, and is unlikely to do so unless it becomes widespread. The 14.4 update began rolling out last week, delivering new emojis and bug fixes – though it may have introduced some bugs of its own.

Can you fix it?

Users experiencing problems with USB hubs are attempting a variety of troubleshooting techniques. There does not appear to be a single method that works for all sorts of displays and USB connections.

For some, turning everything off and then back on seemed to work. Others have claimed that going to the Privacy & Security page in macOS System Settings, changing the Allow accessories to connect option to Ask every time, and rebooting resolves the issue.

Reading between the lines, there may be something wrong with the way macOS ‘sees’ the USB hub and the devices connected to it as peripherals, but there are a lot of steps along that chain – some users have discovered that simply switching to a different USB cable helps.
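
If you want to check what macOS currently ‘sees’ on the USB bus, the built-in system_profiler tool lists every device the OS recognizes. A quick sketch (the SPUSBDataType report is standard, though the exact JSON layout can vary between macOS releases):

```python
# Rough sketch: list every USB device macOS currently recognizes via the
# built-in system_profiler tool. If your monitor's hub (or a device plugged
# into it) is missing from the output, macOS isn't enumerating it at all.
import json
import subprocess

def list_usb_devices() -> None:
    report = subprocess.run(
        ["system_profiler", "SPUSBDataType", "-json"],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(report)

    def walk(items: list, depth: int = 0) -> None:
        for item in items:
            print("  " * depth + item.get("_name", "unknown device"))
            walk(item.get("_items", []), depth + 1)  # hubs nest their devices

    walk(data.get("SPUSBDataType", []))

list_usb_devices()
```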

We’ll have to wait and see whether Apple provides a fix for those affected. Of course, if workarounds have already been discovered, any necessary bug fixes for the problem will most likely be quietly rolled into the macOS Sonoma 14.5 release.


Forget Sora, here is the AI video that will blow your mind – and possibly worry you

Humanoid robot development has advanced at a snail’s pace for the better part of two decades, but rapid acceleration is now underway courtesy of a collaboration between Figure AI and OpenAI, which has produced the most stunning real humanoid robot video I’ve ever seen.

On Wednesday, Figure AI, a robotics startup, released a video update (see below) of its Figure 01 robot running a new Visual Language Model (VLM), which has somehow transformed the bot from a somewhat dull automaton into a full-fledged sci-fi bot with capabilities approaching those of C-3PO.

Figure 01 is shown standing behind a table with a plate, an apple, and a cup. There is a drainer on the left. A person stands in front of the robot and says, “Figure 01, what do you see right now?”

After a few seconds, Figure 01 responds in a remarkably human-sounding voice (there is no face, just an animated light that moves in time with the speech), describing everything on the table, as well as the man standing in front of it.

“That’s cool,” I thought.

Then the man says, “Hey, can I have something to eat?”

Figure 01 says, “Sure thing,” and then picks up the apple with a deft flourish of fluid movement and hands it to the guy.

“Woah,” I exclaimed.

The man then empties some crumpled garbage from a bin in front of Figure 01, asking, “Can you explain why you did what you just did while picking up this trash?”

Figure 01 wastes no time explaining its reasoning as it places the paper back in the bin: “So, I gave you the apple because it’s the only edible item I could provide you with from the table.”

I thought, “This can’t be real.”

It is real, though, according to Figure AI.

Speech-to-speech

According to the company, Figure 01 uses “speech-to-speech” reasoning to understand images and text, relying on the entire vocal interaction to build its responses. This differs from, say, OpenAI’s GPT-4, which focuses on written prompts.

It also employs what the company calls “learned low-level bimanual manipulation.” To control movement, the system combines precise image calibrations (down to the pixel level) with its neural network. “These networks take in onboard images at 10hz, and generate 24-DOF actions (wrist poses and finger joint angles) at 200hz,” the company said in a press release.
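
Figure AI hasn’t released any code, but the quote implies a classic two-rate control loop: a vision-driven policy running at 10Hz feeding an action stream at 200Hz. Purely as an illustration of that structure (every name here is hypothetical scaffolding, not Figure’s actual software):

```python
# Illustrative two-rate control loop: a vision policy consumes camera frames
# at 10 Hz while the robot receives 24-DOF actions (wrist poses and finger
# joint angles) at 200 Hz. Hypothetical scaffolding, not Figure's code.
import time

VISION_HZ, ACTION_HZ, DOF = 10, 200, 24

def run_policy(frame) -> list[list[float]]:
    # Stand-in for the neural network: one camera frame in, a chunk of
    # ACTION_HZ // VISION_HZ joint-angle targets out.
    return [[0.0] * DOF for _ in range(ACTION_HZ // VISION_HZ)]

def control_loop(camera, robot) -> None:
    while True:
        frame = camera.read()          # a fresh image every 1/10 s
        actions = run_policy(frame)    # 20 actions per frame
        for action in actions:         # stream them out at 200 Hz
            robot.send(action)
            time.sleep(1 / ACTION_HZ)
```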

The company states that all behavior in the video is based on system learning and is not teleoperated, meaning there is no one behind the scenes puppeteering Figure 01.

Without seeing Figure 01 in person and asking my own questions, it is difficult to verify these claims. It is possible that Figure 01 has run this routine before – it could have been the 100th take, which would explain its speed and smoothness.

Or maybe this is 100% true, in which case, amazing. Just wow.


Microsoft plans to make Copilot function like a ‘regular’ program in Windows 11

Windows 11 is in line for a significant change to the Copilot interface – or at least that’s what is being tested.

In Windows 11 preview build 26080 (available in both the Canary and Dev channels), Microsoft has included the option to liberate Copilot from the shackles that tether the AI assistant to the right-hand side of the screen.

Currently, the Copilot panel appears docked on the right by default, and you can’t change that.

With this feature, you can now undock Copilot and have the AI in a standard app window that can be moved and resized as needed on the desktop. In other words, you get a lot more control over where Copilot appears.

In this preview build, more users will gain access to Copilot’s new ability to change Windows 11 settings. That functionality was previously offered to Canary testers, but it is now being extended to more of them, as well as Windows Insiders in the Dev channel.

The added capabilities include having the AI assistant empty the Recycle Bin, turn on Live Captions, or enable Voice Access.


Analysis: Under-the-Hood Tinkering

Not all testers in the aforementioned channels will be able to fully liberate Copilot and allow the AI to roam the desktop for the time being. Microsoft says it is just getting started with the distribution, which will first be limited to users in the Canary channel. A wider deployment will follow, with Microsoft soliciting feedback as it goes and tweaking things based on what it learns from Windows 11 testers, no doubt.

As indicated in the blog post, certain ‘under-the-hood enhancements’ are also on the way for Copilot, but Microsoft has yet to reveal what they are. We can only presume this is about performance, given that it seems the most obvious way that fiddling in the background could improve things with Copilot. (Perhaps even to ensure the smooth operation of the undocked panel for the AI.)


Copilot for Security is not an oxymoron – it’s a possible game changer for security-starved firms

Consider this: you’re a new security operations coordinator at a huge corporation that is dealing with dozens of ransomware attacks every day. On your first day, you must assess the threats, understand them, and put together a defense strategy.

Naturally, Microsoft believes that generative AI can help a lot here, and now, after a year of beta testing, the company is formally launching Copilot for Security, a platform that could make that first day go much more smoothly.

In some ways, Copilot for Security (formerly known as ‘Security Copilot’) is similar to a customized version of the Copilot generative AI (based on GPT-4) found in Windows, Microsoft 365, and the increasingly popular mobile app, but with enterprise-level security at its core.

When it comes to security, businesses of all sizes need all the assistance they can get. According to Microsoft, there are 4,000 password attacks every second and more than 300 distinct nation-state threat actors. The company says one of these attackers can gain complete access to your data within 72 minutes of someone in your organization clicking on a phishing link. These attacks cost corporations trillions of dollars each year.

During demos, I was shown how Microsoft Copilot for Security functions as an intense and ultra-fast security consultant, able to scan through complex hash files and scripts to determine their true intent and swiftly identify both known dangers and things that behave like existing threats. Microsoft argues that employing such a service will help to address the security personnel talent gap.

The platform, which will be priced based on usage and the number of security compute units consumed (Microsoft refers to this as a “pay-as-you-go” model), is notably not a doer. At this time, it will not delete or block any suspicious files or emails. Rather, it seeks to explain, guide, and recommend. Furthermore, because it’s a prompt-based system, you can ask it specific questions about its analysis: if Greg in IT is discovered downloading or modifying hundreds of files, you might request more information about his activity.

Microsoft Copilot for Security is intended to interface with Microsoft products, although it can also function with a variety of plugins.

It can also assess other generative AI platforms and detect when employees begin sharing sensitive, private, or even encrypted company information with these chatbots. If you’ve configured permissions to prevent such files from being shared with specific third-party chatbots, Copilot for Security can apply that rule, recognize the file’s security level, add a ‘confidential’ label, and automatically block sharing.

The advantage of using an AI is that you can examine a threat using natural language rather than sifting through menus for the appropriate tool or action. It becomes a two-way street, with a sophisticated security-aware system that understands the context of your conversation and can dig in and guide you in real time.

Emphasis on aid

For all of its analysis and recommendations, Microsoft Copilot for Security does not take action itself; it relies on Windows Defender or other security solutions for mitigation.

Microsoft says that Copilot for Security can help practically any security expert become more effective. The company has been beta-testing the platform for a year, and the initial results are promising. The company discovered that Copilot for Security helped newcomers to the field be 26% faster and 34% more accurate in their threat assessments, while experienced users were 2% faster and 7% more productive.

More crucially, Microsoft asserts that responses were 46% more accurate for security summarization and incident analysis with the tool than without it.

Copilot for Security will be generally available on April 1, and this is no joke.


Google Gemini’s new Calendar skills bring it one step closer to becoming the perfect personal assistant

Gemini, Google’s new family of artificial intelligence (AI) generative models, will soon be able to access Google Calendar events from Android phones.

According to 9to5Google, Calendar support was on the ‘things to fix ASAP’ list shared by Jack Krawczyk, Google’s Senior Director of Product Management for Gemini Experiences – a list of what Google would work on to make Gemini a more capable digital assistant.

Users of the Gemini app for Android can now expect Gemini to respond to voice or text requests such as “Show me my calendar” and “Do I have any upcoming calendar events?”. When 9to5Google tried this the week before, Gemini replied that it couldn’t fulfill those types of requests – especially notable given that such requests are table stakes for competing (non-AI) digital assistants such as Siri or Google Assistant. When the same prompts were attempted this week, however, Gemini launched the Google Calendar app and completed the tasks.

It appears that users who want to enter a new event through Gemini must tell it something like “Add an event to my calendar,” at which point it should prompt them to dictate the event details by voice.

Going all-in on Gemini

Google is clearly making headway on establishing Gemini as its all-in-one AI offering (one that will eventually replace Google Assistant). It still has a long way to go, with users requesting features such as the ability to play music or amend their shopping lists through Gemini. Another big barrier for Gemini to overcome, if it is to gain popularity, is that for the moment it is only available in the United States.

The competition for the best AI assistant has recently heated up between Microsoft with Copilot, Google with Gemini, and Amazon with Alexa. Google has lately made significant gains in compressing larger Gemini models so that they can run on mobile devices, and these more capable models have the potential to significantly improve Gemini’s abilities. Google Assistant is already well known, and this adds another feather to Google’s cap. I’m hesitant to bet on any of these digital AI helpers individually, but if Google keeps up this pace with Gemini, I believe its chances are strong.


What is Google Gemini? Everything you should know about Google’s next-generation AI

Until recently, OpenAI was the dominant force in artificial intelligence (AI) and chatbots, with its GPT-4 large language model (LLM) powering ChatGPT (as well as Microsoft’s Copilot) and taking the world by storm. The company took an early lead, and everyone else has been playing catch-up ever since.

However, OpenAI now faces a new rival in the form of Google Gemini. This newcomer debuted in February 2024 (after being revealed at the end of 2023) and quickly caused a stir in the AI community.

Is it enough to beat GPT-4? What can it do now, and what about in the future? And how do you use Gemini? We dug deep into the world of Gemini to find answers to all of these questions and more. If you’re interested in Google’s latest AI developments, here is the place to be.

What is Google Gemini?

Gemini is Google’s most recent large language model (LLM). What is an LLM? It’s the system that powers the AI tools you’ve probably seen and used on the internet. For example, GPT-4 powers ChatGPT Plus, OpenAI’s sophisticated paid-for chatbot.

Gemini, however, is more than just an AI model; it also serves as the new name and identity for the Bard chatbot. Yes, Bard is no more and has been completely replaced by Gemini. Essentially, Google has simplified things by naming both the underlying technology and the chatbot itself Gemini. Furthermore, there is a free Gemini app for Android, and Gemini can replace Google Assistant on your Android phone if you choose. On iOS, Gemini is present within the Google app.

On top of that, Google has relaunched its Duet AI service for enterprises as Gemini for Workspace, which includes a slew of productivity-related features.

The third twist is that, in addition to the basic (free) version of Gemini for consumers, there is a subscription option called Gemini Advanced. This paid tier is based on a more powerful LLM known as Gemini Ultra, and comes as part of the Google One AI Premium membership, which carries additional benefits.

To summarize, all of Google’s AI properties now sit under the Gemini umbrella, whether it’s AI for consumers or enterprises, and whether you access Gemini via the web, the assistant, or the app on your smartphone.

What can Gemini do?

The short answer to this question is “a lot.” However, you are probably expecting us to go into further detail.

As we’ve just discussed, Gemini is a broad umbrella for a wide range of AI capabilities and functionality given through several channels.

Google noted in a press release when Gemini was first announced that the AI is a multimodal tool. In other words, it can work with a variety of input and output formats, including text, code, audio, images, and video. This gives it a great deal of versatility across a wide variety of tasks.

However, Google has deployed two distinct LLMs for its AI. The free version is powered by Gemini 1.0 Pro, while the subscription AI (Gemini Advanced) is powered by Gemini 1.0 Ultra. And, yes, it has not gone unnoticed that the simplification of bringing everything under the Gemini brand has produced its own comical complexities and confusion.

All you need to know is that the free version, Gemini Pro, is considerably simpler and less accurate, and lacks the inventiveness and depth of the paid-for Gemini Ultra LLM.

So, what can Gemini Pro do? It can answer basic queries, summarize text, and generate images. Gemini also integrates with other Google services, like Gmail, Google Maps, and YouTube. So if you ask it for sightseeing recommendations, it will display them in Google Maps.
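
Developers can reach the same Gemini Pro model programmatically through Google’s generative AI SDK. A minimal sketch, assuming you’ve installed the google-generativeai package and obtained an API key from Google AI Studio:

```python
# Minimal sketch: querying Gemini Pro with Google's google-generativeai
# Python SDK (pip install google-generativeai). The key is a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder key
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Suggest three sightseeing spots in Tokyo.")
print(response.text)
```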

There are some useful features here, and those with Android devices can get even more mileage for free with the Gemini app. As previously mentioned, this can replace Google Assistant on an Android smartphone, executing queries and working its AI magic through other Google services. You don’t have to replace Assistant, though – keeping both is also an option, and you can open the Gemini app whenever you want the AI to step in and help.

That choice may lead you to ask whether Gemini is a suitable replacement for Google Assistant. We discussed this extensively in our hands-on with Gemini on Android, and there are undoubtedly limitations to the AI. Gemini is noticeably slower than Google Assistant, and it still has its problems. For example, it is meant to be able to interact with images, although this capability remains glitchy (at the time of writing).

However, the initial Gemini on Android experience was far less stable than it is today. For example, there were some basic errors with the interface (although one big issue was promptly resolved). Another example comes to mind: Gemini was supposed to work with smart home gadgets, but failed at the task when we first tried it; this functionality has since improved and now works properly.

It appears that Google is quickly resolving issues with Gemini on mobile, which is encouraging, and the Gemini app can produce excellent results. There are still kinks to iron out when it comes to replacing Google Assistant on Android, but that should come with time.

Overall, the free version provides plenty of options, particularly for Android users. However, the commercial version of Gemini is significantly more in-depth.

Gemini Ultra (the model that powers the Gemini Advanced subscription) brings a slew of sophisticated capabilities to the table, such as answering multi-step queries and assisting with more complicated tasks like coding. On a broader level, it is also more accurate, providing better and more organized responses to questions.

According to our sources, Gemini will soon be available in Google Docs, Gmail, and other Google productivity tools, but only for Advanced members.


Finally, it’s worth noting that Google is already developing the next-generation Gemini 1.5 model, which will take both LLMs to new heights with the Gemini 1.5 Pro and Gemini 1.5 Ultra.

Gemini 1.5 Pro is now in early testing and can handle much longer prompts, delivering what Google describes as “dramatically enhanced performance”, no less. In one test, Gemini 1.5 was given a 400-page transcript of the Apollo 11 lunar landing and instructed to find “comedic moments” in the mission. It dutifully complied, picking out a few jokes told by the astronauts – and it took barely 30 seconds. Impressive.

When was Gemini released?

Google Gemini launched on February 8, 2024, and Google confirmed that it replaced Bard, which was put out to pasture. Gemini was immediately available in both its free and premium versions, including Gemini Advanced. Google also began rolling out the Android app in the United States right away.

The US rollout was completed within a week, so it moved quickly, but the Gemini Android app has yet to appear overseas – though we’re told it will be available in additional countries soon. While iOS distribution lagged behind, Gemini is now accessible on Apple handsets (as part of the Google app, albeit in a more limited form than on Android).

Is Google Gemini free?

The regular version of Google Gemini is free, although it has fewer features than the paid AI. As previously stated, the free Gemini AI is based on a simpler model, whereas those who pay for Gemini Advanced benefit from far more features and capabilities.

How much is Gemini Advanced? Google charges $19.99 / £18.99 / AU$32.99 a month, but you can test it for free for a limited period thanks to a two-month trial offer. The membership, however, comes with additional benefits, as Gemini Advanced is part of the Google One AI Premium Plan, which includes 2TB of cloud storage among other perks.

Given that Google One with 2TB of storage already costs $9.99 / AU$12.49 per month, you’re effectively paying around $10 a month for the AI itself – which makes Gemini Advanced look like decent value.

How do I use Google Gemini?

The way you use Google Gemini varies depending on the version you’re interested in and the product it’s integrated with.

You can use the AI on the Gemini website, communicating with it in the same manner as any online chatbot (and much as you once did with Google Bard).

Alternatively, you can utilize the Gemini app on your Android phone (or replace Google Assistant with Gemini, as previously stated). On iOS, you can get Gemini capabilities through the Google app. Oh, and according to rumors, you may soon be able to use Gemini with your headphones.

Finally, there is the option of subscribing to Gemini Advanced for the full AI experience, which includes sharper answers, help with difficult tasks and demanding creative needs, and the other perks listed above.

Gemini vs GPT-4: what is the difference?

How does Gemini compare to GPT-4 in the battle of the large language models?

When Gemini was first announced, Google claimed it was more sophisticated than GPT-4. In a blog post, Google published the results of eight text-based benchmarks, with Gemini winning seven of them; according to Google, Gemini also won all ten multimodal benchmarks.

That would seem to imply that Gemini is the superior system, but it isn’t quite that simple. GPT-4 was released in March 2023, so Gemini is essentially catching up with a competitor AI that is nearly a year old. We don’t know how capable OpenAI’s next version of GPT will be, and there are many complexities in this contest beyond Google’s limited benchmarks, making it difficult to conclude which tool is genuinely superior at present.

Furthermore, Google only tested its more advanced model, Gemini Ultra, against GPT-4, not Gemini Pro. Given the often-thin margins between GPT-4 and Gemini Ultra, it seems likely that OpenAI’s model outperforms Gemini Pro.


What is ray tracing?

Ray tracing is one of the most exciting developments in PC gaming in recent years, allowing studios to create much more vibrant and realistic game environments. While it is not yet widely used in the industry, ray tracing capability is being added to an increasing number of top PC games.

So, what exactly is ray tracing, why is it not supported in more games, and what does it require to function properly? We’ll answer all of these questions in the explainer below. We’ll also tell you which graphics cards perform the best at ray tracing, which is essential if you want to maximize ray tracing speed.

RAY TRACING EXPLAINED

1. What is ray tracing?

Ray tracing is a rendering technique that can produce extremely realistic lighting effects. Put simply, it’s a more advanced and realistic way of simulating light and shadows in a scene.

An algorithm traces the path of light from a source and simulates how it interacts with the objects it encounters, including the formation of shadows and reflections. It also allows for more realistic translucency and scattering effects.
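
To make that concrete, here’s a toy sketch (not from any particular engine) of the ray-sphere intersection test at the heart of most ray tracers – for each ray, it answers ‘what do I hit, and how far away?’:

```python
# Toy sketch of the core ray tracing primitive: does a ray hit a sphere, and
# how far along the ray? Solves |origin + t*direction - center|^2 = radius^2.
import math

def ray_sphere_hit(origin, direction, center, radius):
    oc = [o - c for o, c in zip(origin, center)]      # center -> ray origin
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                                   # ray misses entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)            # nearest hit distance
    return t if t > 0 else None

# A ray fired down the z-axis hits a unit sphere centered at z = 5 at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```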

2. Why is ray tracing important?

Ray tracing support is relatively new and not widely available because of the computing resources required to simulate realistic lighting: the algorithm must work out where light strikes virtual objects and model the resulting interactions and interplay, much as the human eye processes light, shadows, and reflections in real life.

Before ray tracing was employed in games, a technique known as rasterization was the norm. This converts 3D visuals into 2D pixels for display on your screen, then uses shaders to approximate realistic lighting.
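
As a toy illustration of that conversion (simplified far beyond what a real rasterizer does), here’s the perspective projection step that maps a 3D point to a 2D pixel:

```python
# Toy sketch of the rasterization step: perspective-project a 3D point
# (camera space, z pointing into the screen) onto a 2D pixel grid.
def project(point, focal_length=1.0, width=1920, height=1080):
    x, y, z = point
    if z <= 0:
        return None                       # behind the camera
    ndc_x = (focal_length * x) / z        # perspective divide: distant
    ndc_y = (focal_length * y) / z        # points shrink toward the center
    px = int((ndc_x + 1) * 0.5 * width)   # map [-1, 1] to pixel coordinates
    py = int((1 - ndc_y) * 0.5 * height)  # flip y: screen y grows downward
    return px, py

print(project((0.5, 0.25, 2.0)))  # -> (1200, 472)
```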

3. Why isn’t ray tracing more popular?

Ray tracing is widely utilized in CGI-based films and television shows because production firms can leverage entire server farms or cloud computing to do the computations required to create computer graphics imagery.

However, video games have only recently been able to take advantage of it, thanks to advances in PC graphics hardware – and even now there are restrictions, as a fully ray-traced game in the same vein as film and TV CGI is far too demanding for current graphics cards.

However, graphics cards, particularly Nvidia’s GeForce RTX line, have advanced at a rapid pace and can execute more calculations.

4. What graphics cards enable ray tracing?

In general, if you want a graphics card that supports ray tracing while also providing a solid frame rate and well-rounded performance, the Nvidia GeForce RTX 3060 Ti and AMD Radeon RX 6800 XT and higher will deliver, with at least 40fps depending on the game.

However, if you want better ray-tracing performance, investing in the current-generation RTX 4000- or RX 7000-series is a wise decision. This guarantees that the graphics card is capable of handling the algorithms required to provide stunning lighting effects while maintaining a high frame rate.

Final Thoughts


While ray tracing is a remarkable technology in and of itself, various tools can help it function smoothly. Nvidia’s GPUs support Deep Learning Super Sampling (DLSS), while AMD offers FidelityFX Super Resolution (FSR).

When comparing the two, DLSS performs better, but it is only compatible with Nvidia cards. Meanwhile, FSR has improved significantly but still lags behind DLSS, despite being compatible with both Nvidia and AMD graphics cards.
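
The arithmetic behind both upscalers’ appeal is simple: render fewer pixels, then reconstruct the rest. For example:

```python
# Why upscaling helps: rendering at 1440p and upscaling to 4K means the GPU
# shades roughly 2.25x fewer pixels per frame than rendering 4K natively.
native_4k = 3840 * 2160       # 8,294,400 pixels
internal_1440p = 2560 * 1440  # 3,686,400 pixels
print(f"{native_4k / internal_1440p:.2f}x fewer pixels shaded")  # 2.25x
```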

Some argue that DLSS is a superior technology to ray tracing, because the former is significantly less taxing on your gaming system while improving overall performance. Regardless, it will be interesting to see how ray tracing develops in the coming years.


Windows 11 is about to remedy two of its most aggravating problems

We’re always keeping an eye on the early beta versions of Windows 11 to see what’s coming, so we’re delighted to see two minor but useful changes on the way, which should be available to everyone shortly.

First and foremost, it appears that Microsoft will finally include an option to hide the news feed in the widgets board. This has been enabled in build 26058 (via XDA Developers), allowing you to check the weather or sports scores without being bombarded with the latest news from around the web.

The new view is simply named My Widgets, and the thinking is that it may have been implemented in part to placate EU regulators, who are keen to give consumers as much flexibility as possible. However, judging by its presence in Windows 11’s Dev and Canary channels, this change will be available worldwide.

To access widgets, click the widgets icon on the taskbar. It should be to the left of the other icons and may already be displaying dynamic information (such as the weather or a traffic alert); otherwise, it is a white rectangle adjacent to a blue rectangle.

Clearer cutting, copying, and pasting

Second, also in build 26058 (via MSPowerUser), Windows 11 adds text labels to the cut, copy, and paste icons that appear when you right-click in File Explorer. If you’ve ever squinted at a pop-up menu trying to figure out where to click, you’ll understand how useful these labels are.

Of course, you can still use the familiar keyboard shortcuts if you prefer, but for those of us who use File Explorer’s context menus, this should go a long way toward preventing files from being moved or copied to the wrong location.

For further information, see Microsoft’s blog post on the latest update. Another feature to keep an eye out for is a new crosshairs option for the cursor (see above), which is designed to help low vision users select items more accurately.

As always, Microsoft’s plans are subject to change, and features that appear in preview versions of Windows are not necessarily made available to all users. However, these fixes appear to have a good chance of making the cut, so we’re looking forward to seeing them arrive.
