Snag Beats earbuds as a last-minute gift for up to $90 off


Eye Floaters

Have you ever noticed specks or strings drifting across your vision?

If so, you may be one of the many people worldwide who experience eye floaters at some point.

While this condition is not typically considered severe, it can still be frustrating and uncomfortable, affecting your vision and quality of life.

Usually, the brain ignores these floaters, and they go unnoticed.

However, when they increase in number or become more concentrated in a particular area, they can cause annoyance or discomfort.

There are several treatments available to help ease the symptoms of eye floaters.

This article will discuss the different options and advances in artificial intelligence that can educate patients and enhance the management of this common condition.

What are Floaters?

Eye floaters are specks or strands that can appear in the field of vision and move around when the eyes move.

These floaters are caused by the shrinking of the vitreous, a gel-like substance that fills the eye.

The vitreous becomes more liquid and less gel-like as we age, causing the collagen fibers to clump together and form specks or strands.

Diagram of floaters (Exeter Eye)

While most floaters are harmless and do not require treatment, some may indicate a severe eye condition, such as retinal detachment or inflammation.

Aside from that, eye floaters can be distracting and, at times, concerning, particularly when their numbers increase.

Some individuals may experience significant interference with their vision, compromising their quality of life and overall well-being.

Treating Eye Floaters

Fortunately, several current treatments address eye floaters, with various approaches depending on the individual’s condition and overall severity of symptoms.

Laser Floater Treatment (LFT), a non-surgical procedure performed in the office, is one of the current treatments for eye floaters.

LFT uses laser light to dissolve eye floaters, reducing their visibility in the patient’s field of vision.

It has emerged as a viable alternative for people who are hesitant to undergo surgery, as it poses minimal risk of complications and is a less invasive option than surgical treatments.

Studies have shown that LFT is effective in treating eye floaters.

One study, which assessed the safety and effectiveness of YAG laser vitreolysis as a treatment for vitreous floaters, concluded that LFT is a suitable option for patients hesitant to undergo surgery due to its minimal risk of complications.

The study also found that LFT shows promising results in improving subjective and objective outcomes for symptomatic floaters.

However, given the limited available evidence, further research is needed to determine the exact role of YAG laser vitreolysis in treating vitreous floaters.

Another current treatment option for eye floaters is vitrectomy, a surgical procedure that involves removing the vitreous, the jelly-like substance inside the eye where the floaters are suspended.

This approach is relatively uncommon and is typically reserved for extreme cases where the floaters interfere significantly with the individual’s vision.

Vitrectomy carries some risks, including infection and retinal detachment.

Eye floaters can also be treated through medical management and patient education, especially if an underlying medical condition causes them.

For instance, if the floaters result from inflammation or bleeding due to diabetes, controlling or treating the underlying condition can help alleviate the floaters.

The Role of Patient Education in Treating Floaters

Educating patients regarding eye floaters and related disorders is paramount to successful treatment.

Patients should understand the condition, including its causes and available treatment options.

Additionally, they should be informed about the risk factors, such as age, eye trauma, and specific medical conditions that can lead to eye floaters.

By being mindful of these risk factors, patients can take preventative measures to avoid the development of eye floaters.

One recent study examined AI chatbots for patient education on retinal floaters.

The study assessed multiple AI chatbots and their ability to help provide practical and actionable patient education.

It also highlighted the AI systems’ accuracy in answering questions related to floaters from a retinal specialist’s point of view.

The researchers found that both ChatGPT™ and Google Assistant™ had weak scores, indicating the bots were inadequate in providing in-depth specialist information.

Additionally, while AI chatbots can be a helpful tool for patient education, they should not replace the need for a qualified healthcare professional.

Patients must be encouraged to seek medical advice and not rely solely on AI chatbots for diagnosis and treatment.

To a Clearer Future

Eye floaters are a common condition that can be uncomfortable and negatively impact one’s quality of life, particularly as the number of floaters increases.

Fortunately, several treatments are available for eye floaters, including Laser Floater Treatment (LFT).

Studies show that LFT effectively treats vitreous floaters and has promising results in improving subjective and objective outcomes for symptomatic floaters.

As technology advances and shapes the healthcare industry, the future of eye floater treatment looks promising.

Integrating artificial intelligence and human expertise can improve patient care by providing accurate and tailored information and guidance to patients and caregivers, helping them make informed decisions about their health.

We can look forward to a future where AI and human expertise work together to provide the best possible patient care.


In 2023, artificial intelligence and generative AI were pushed into the spotlight when OpenAI commoditized them.

In a short time, products like ChatGPT and copilot tools, built on generative AI, boosted user productivity and reduced the time needed to perform basic tasks.

As a basic example, users can now swiftly edit an email for grammatical errors using tools such as Grammarly, illustrating the efficiency gains achieved through AI.
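To make that concrete, here is a minimal sketch of a grammar-fixing helper built on the OpenAI Python SDK; it stands in for Grammarly-style copilots generally, and the model name, prompt, and draft text are illustrative assumptions rather than anything from the article.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

draft = "i will joining the meeting tomorow at 10, pls confirm"  # hypothetical email draft

# Ask a general-purpose model to act as a lightweight grammar copilot.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Fix spelling and grammar. Keep the meaning and tone."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)  # corrected email text
```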

Beyond productivity, these advances are transforming information into a company-wide commodity—ensuring widespread accessibility and further elevating overall efficiency.

As organizations continue to leverage tech to power improved efficiency and productivity, here are three predictions I’ll be watching as we head into 2024.

1. The use of speech and bots will influence the trajectory of low-code development.

Low-code development isn’t coming; it’s already here.

Gartner, Inc. predicts that “by 2025, 70% of new applications developed by organizations will use low-code or no-code technologies.”

This phenomenon has enabled business technologists with little or no coding know-how to create solutions that help their organizations meet their business needs.

Now, as AI is integrated into low-code platforms, I foresee a transition toward building applications using natural language or bots rather than further maturity in traditional low-code interfaces.

This is one of the most promising developments in this space, in my opinion, and it’s likely to continue its upward trajectory and ultimately contribute to a notable advancement in the maturity of low-code development.

I predict that in the next few years, a substantial number of individuals will opt to use low-code and no-code platforms specifically to automate tasks through bots, rather than relying on the conventional low-code and no-code approach to automation.

As the technology becomes more sophisticated, adoption will continue to rise, and ROI will improve as AI-driven capabilities evolve.

2. Generative AI copilots will become a norm across the IT landscape.

As the adoption of generative AI broadens, I anticipate widespread integration of tools such as Copilot across various domains—encompassing automation, iPaaS, API management, low-code platforms, application development and other related products.

This extends to its utilization in security operations (SecOps), artificial intelligence operations (AIOps), DevOps and various facets of automation.

In essence, Copilot is poised to become ubiquitous across diverse technological landscapes, which will play a significant role in making teams much more productive.

3. Real-time speech translation will take center stage.

If 2023 was the year of generative AI and large language models, then meaningful advances in real-time voice and speech translation will define the next few years.

In today’s globalized workforce, it’s not uncommon for professionals to collaborate and communicate across borders—even when they don’t speak the same language.

While it may be relatively straightforward when teams from English-speaking countries like the U.S., Canada, the U.K. or India collaborate, challenges arise when additional colleagues from countries such as France, Brazil or China are involved.

Manual translation of languages is time-consuming, but new and emerging AI capabilities are poised to make this hurdle a thing of the past.

Progress is already being made.

Meta released a speech-to-text model that can translate nearly 100 languages.

Real-time speech translation can not only enhance efficiency and productivity but also empower businesses to expand their global operations more effectively.
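As a rough illustration of the building blocks behind such features, here is a minimal sketch using OpenAI’s open-source Whisper model to translate recorded speech into English text. This is an offline, batch example rather than true real-time streaming, it is not the Meta model mentioned above, and the file name is a placeholder.

```python
# pip install -U openai-whisper  (ffmpeg must also be installed)
import whisper

# Load a small multilingual checkpoint; larger checkpoints translate more accurately.
model = whisper.load_model("small")

# task="translate" asks Whisper to output English text regardless of the spoken language.
result = model.transcribe("meeting_clip_french.wav", task="translate")

print(result["text"])
```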

When you package these three predictions together, a common theme emerges: productivity.

Taking a step back and looking at the bigger picture reveals that organizations of all sizes and across industries are looking for ways to embrace AI and automation in a way that makes them more efficient and allows them to intensify their focus on their ultimate business goals.

While these technologies may not fully hit their primes in 2024, they should advance enough for businesses to use them in meaningful ways that drive their organizations forward.

Ultimately, substantial progress across the areas described above will contribute to the hyperautomation of businesses.

AI has the potential to transform how businesses function, improving automation in multiple areas.

With its ability to identify, vet and automate complex tasks and processes at an unprecedented pace, AI will create a critical inflection point in the coming year.

I’m excited to see how the future unfolds as AI becomes instrumental in discovering better business processes, identifying potential automation opportunities and prioritizing tasks based on their ROI.


In context: It’s no secret that Nvidia is dominating in the AI space, with companies big and small using its GPUs as well as the CUDA software stack to power their machine-learning projects.

Intel CEO Pat Gelsinger says Team Green’s success is partly the result of luck, as Nvidia enjoyed 15 years of Intel being inactive in the discrete GPU space.

Whether Intel can get lucky in the coming years now that Gelsinger is back at the helm remains to be seen.

Intel CEO Pat Gelsinger believes Nvidia’s dominance in AI is more the result of luck than of the software and hardware that Team Green has been developing over the past several years.

As you’d expect, it didn’t take long for someone from Nvidia’s machine learning group to fire back at his remark, though Gelsinger did explain how Intel might also get lucky in the coming decades.

During an interview hosted by the Massachusetts Institute of Technology (MIT), Team Blue’s chief was asked about Intel’s AI hardware efforts and whether he believes they represent a competitive advantage.

Gelsinger began by lamenting Intel’s past mistakes, noting that Nvidia CEO Jensen Huang “got extraordinarily lucky” with his bet on AI and Intel could have been just as lucky had the company not given up on the Larrabee discrete GPU project.

Gelsinger went on to explain how his departure from Intel 13 years ago set the company on a bad trajectory where projects like Larrabee that “would have changed the shape of AI” got canceled, allowing Nvidia to thrive with very little competition in the high-performance computing space.

Then he characterized Jensen Huang as a hard worker who was initially laser-focused on graphics technology but then was lucky enough to branch out into AI accelerators when the industry started moving in that direction.

Related reading: The Last Time Intel Tried to Make a Graphics Card

Now that he’s at the helm of Intel, Gelsinger is determined to course-correct with a strategy of “democratizing” AI.

To that end, Intel is looking to bake a neural processing unit (NPU) into every machine and we can already see that with the launch of the Meteor Lake CPU lineup.

Another area where Intel will focus is software, with a lot of work being put into developing open-source software libraries to eliminate the need for proprietary technologies like CUDA.

Moving forward, Gelsinger says we can expect at least two decades of innovations in the AI space.

He believes that since AI today is mostly used to tap into simple data sets like text to create services like ChatGPT, there’s a lot of room for advancements in training AI models for a variety of other applications using more complex data sets.

Demand for AI hardware is growing rapidly, so Intel’s investments into new chip factories could also pay off in spades in the coming years.

Even Nvidia is contemplating using Intel as a manufacturing partner, and it will be interesting to see if the latter company can use that interest to help its foundry business succeed in the long-term.

I worked at Intel on Larrabee applications in 2007.

Then I went to NVIDIA to work on ML in 2008.

So I was there at both places at that time and I can say: NVIDIA’s dominance didn’t come from luck.

It came from vision and execution.

Which Intel lacked.

Gelsinger’s remarks look more like an admission that Intel made a big mistake in giving up on its discrete GPU ambitions for over a decade, but they still invited a response from Nvidia’s VP of Applied Deep Learning Research, Bryan Catanzaro.

Catanzaro explains that he was part of the Larrabee project at Intel before moving on to work at Nvidia and, from his point of view, Nvidia’s dominance came from executing a vision that Intel simply lacked.


Apple Mac Studio

MSRP: $3,999.00

DT Editors’ Choice

“The Mac Studio is a diminutive floating monolith that packs a punch.”

Pros: superior build quality, small chassis, solid connectivity, excellent creativity performance, quiet operation, elegant aesthetic

Cons: expensive, mediocre gaming

The Mac Studio came out in 2022 and was updated with the latest M2 chips in 2023.

Filling the ground between the affordable Mac mini and the expandable Mac Pro, the Mac Studio is a classic middle child fighting for an identity.

Its strength, of course, is that the Mac Studio packs a lot of power and plenty of ports into a very small format.

In my transition to an all-Apple ecosystem, replacing my desktop tower was saved for last, and I came into my testing curious to see how well the Mac Studio might replace it.

To my surprise, I found it a highly capable machine, despite its limited expandability.

Specs and configurations

Dimensions (HxWxD): 3.7 x 7.7 x 7.7 inches
CPU/GPU: M2 Max 12-core CPU/30-core GPU; M2 Max 12-core CPU/38-core GPU; M2 Ultra 24-core CPU/60-core GPU; M2 Ultra 24-core CPU/76-core GPU
Case: Apple CNC aluminum
Memory: 32GB unified (M2 Max); 64GB unified; 96GB unified (M2 Max 38-core GPU); 128GB unified (M2 Ultra); 192GB unified (M2 Ultra)
Storage: 512GB SSD (M2 Max); 1TB SSD; 2TB SSD; 4TB SSD; 8TB SSD
Power supply: Apple 370W
Ports: 4 x USB-C with Thunderbolt 4 on rear; 2 x USB-C on front (M2 Max) or 2 x USB-C with Thunderbolt 4 on front (M2 Ultra); 2 x USB-A on back; 1 x HDMI; 1 x 10Gb Ethernet; 3.5mm audio jack on back; SD card reader on front
Wireless: Wi-Fi 6E, Bluetooth 5.3
Price: $1,999-plus

The Mac Studio is available in two versions based on the chipset, either the M2 Max or the M2 Ultra.

The specifications above show that important configuration differences exist, including in the amount of storage and RAM that can be selected and in the port configuration.

These things are important to remember when equipping your Mac Studio.

The least you’ll spend on a Mac Studio is $1,999, for an M2 Max 12/30, 32GB of RAM, and a 512GB SSD.

The most you’ll spend for an M2 Max version is a much more expensive $5,399 for an M2 Max 12/38, 96GB of RAM, and an 8TB SSD.

The Mac Studio with the M2 Ultra starts at $3,999 for an M2 Ultra 24/60, 64GB of RAM, and a 1TB SSD.

Fully configured, the Mac Studio is a whopping $8,799 for an M2 Ultra 24/76, 192GB of RAM, and an 8TB SSD.

This places the Mac Studio as a very premium desktop, and you’ll find Windows desktops that are more affordable and offer better performance.

Note also that the Mac Studio is a sealed enclosure with no expandable components.

What you buy is what you’ll get, forever, so choose wisely.

A floating block of pure industrial design

As is typical with Apple PCs today, the Mac Studio looks chiseled from a single block of aluminum — which is essentially what it is.

The MacBooks have that aspect when they’re closed, and so does the Mac Mini.

Even the iMac shares the appearance, albeit with a pane of glass melded in.

The Mac Studio epitomizes the same starkly elegant design, with any vents and seams hidden on the bottom or on the back, out of the way.

Look at the Mac Studio from the usual slightly top-down angle when it’s sitting on a desktop, and it looks like a silver monolith floating a quarter-inch or so off the surface.

A large chrome Apple logo sits on top as the only adornment.

I like the aesthetic quite a bit.

There are two USB-C ports and an SD card reader upfront, but those look like carefully crafted cutouts.

The USB-A, Ethernet, and HDMI ports are on the back, likely because those would expose more of their untidy internals.

Having one or more of the former on the front might be more convenient, but that would break up the illusion.

The back is less tidy, with four USB-C ports, an audio jack, and a power connection to go with an exhaust vent that extends along the entire length.

Note that the four USB-C ports on the back support Thunderbolt 4 on both chipset versions, while the front two are Thunderbolt 4 only with the M2 Ultra.

Clearly, Apple was going for a particular aesthetic, and that’s well accomplished at the cost of some minor convenience.

The Mac Studio is also remarkably small, given the power inside.

It’s less than 8 inches on each side and less than 4 inches tall, almost the same dimensions as two Mac Minis stacked on top of each other.

It fits quite easily under my center 27-inch display, which is held up by a dual-monitor arm.

The cables connecting to the three monitors and the Ethernet jack on my router are hidden in the back, making for an uncluttered appearance.

If you’re connecting wirelessly to the internet, as well as to your keyboard and mouse, you’ll appreciate the Wi-Fi 6E and Bluetooth 5.3 support.

I’m using Apple’s Magic Keyboard with Touch ID, which is just as excellent as the keyboard on my MacBook Pro 14.

My mouse is the Logitech MX Master 3S, which works perfectly with the Mac Studio and can also support my Windows desktop, which remains ready for standby use.

I also connected to external speakers via the 3.5mm audio jack, because the built-in speaker is quite weak and best for system sounds only.

If you want to connect a pair of headphones, you’ll be bummed that the connection is on the back, but you’ll appreciate the support for high-impedance cans.

Overall, the Mac Studio is an incredibly well-designed desktop that’s minimalist and incredibly attractive.

It exudes solidity and quality and epitomizes Apple’s fastidious design sensibilities.

You can find Windows mini-PCs that are similarly sized, like the HP Z2 Mini G9 , but in my opinion, they’re not nearly as cohesively designed.

Very fast, very small, and very quiet, but not the fastest around

My Mac Studio is configured with the M2 Ultra, which is essentially two M2 Max chips glued together.

It’s a bit more complicated than that, of course, but compare the specs and you’ll find twice the CPU and GPU cores.

I chose the base 24-core CPU/60-core GPU rather than the 24-core/76-core version, simply because the latter is an additional $1,000 and the former is already overkill for my workflow.

Both versions have CPUs with 16 performance cores and eight efficiency cores, the same 32-core Neural Engine, and a whopping 800GB/s of memory bandwidth.

The only difference is the extra 16 GPU cores in the more expensive version.

As I ran through our suite of benchmarks, I was amazed at how quiet the Mac Studio remained throughout.

No matter how demanding the process, the machine remained essentially inaudible.

I had to put my ear to the Mac Studio to hear the fans running.

It clearly benefits from a well-engineered cooling system, and it’s orders of magnitude quieter than my Windows desktop (a medium-sized tower that sits on the floor next to my desk).

Windows mini-PCs are also much louder under high loads.

Even the MacBook Pro gets louder when working hard.

Speaking of that, it’s important to take a moment to consider the MacBook Pro that’s now available with up to the M3 Max.

That chipset is manufactured with the new 3nm node, improving on the 5nm process used in the M1 and M2 chipsets.

That transition offers additional power and efficiency, letting Apple retain the same core counts with the new chipsets while boosting performance.

More important, though, are the improvements to the GPU, which now utilizes new technology like Dynamic Caching, hardware-accelerated ray tracing, and mesh shaders.

The improved GPU shows up in gaming and applications that can utilize the GPU to speed up various processes.

Even the Neural Engine has been updated and is now considerably faster, and the chip offers an advanced media engine that enables better hardware acceleration for various video codecs.

Importantly, these updates speed up creative applications, making the new chipset even more powerful for creators.

As we’ll see in the benchmark results, the MacBook Pro 14 with the M3 Max (16 CPU cores, 40 GPU cores, a 16-core Neural Engine, and 400GB/s of memory bandwidth) offers roughly the same performance as the M2 Ultra 24/60.

Even the M2 Ultra 24/76 wouldn’t greatly exceed the MacBook’s performance.

At some point in the future, probably in mid- to late 2024, Apple will likely introduce a Mac Studio with an M3 Ultra, which will certainly offer vastly improved performance over the M2 Ultra reviewed here.

The important takeaway from these results is that the Mac Studio with the M2 Ultra is a very fast PC, especially considering its diminutive stature and quiet operation.

It’s not as fast as the fastest Windows desktops or the MacBook Pro with the M3 Max in certain benchmarks.

In some cases, particularly where the GPU is concerned, it’s considerably slower.

In the all-important PugetBench Premiere Pro benchmark that runs in a live version of Adobe’s Premiere Pro, the Mac Studio is 10% faster than the MacBook and 17% slower than a powerful Windows desktop.

You can certainly build or buy a Windows desktop that will provide faster performance at a lower price, whether you’re a gamer or a creator.

But for MacOS users, the Mac Studio remains the best performance option, albeit by a slim margin over the M3 Max.

Benchmark results (Cinebench R24 single-core / multi-core / GPU; Handbrake in seconds; PugetBench for Premiere Pro):

Mac Studio (M2 Ultra 24/60): 120 / 1,870 / 7,727; Handbrake 56; PugetBench 978
MacBook Pro 14 (M3 Max 16/40): 139 / 1,522 / 12,765; Handbrake 53; PugetBench 889
Custom Windows PC (Core i9-13900K/RTX 4090): 126 / 2,083 / 34,230; Handbrake N/A; PugetBench 1,148
Alienware Aurora R16 (Core i7-13700F/RTX 4070): 112 / 1,070 / 16,974; Handbrake N/A; PugetBench 828
MacBook Pro (M2 Max): 121 / 1,032 / 5,592; Handbrake 85; PugetBench N/A
iMac (M3 8/10): 140 / 657 / 3,728; Handbrake 112; PugetBench N/A

Can it game?

Gaming on a Mac is better than ever, but it still lags Windows by a considerable margin.

The biggest issue is the availability of top titles for MacOS.

Simply put, it’s slim pickings.

When I check my Steam account, I find only a handful of games that will run on my Mac Studio.

I’m not a big gamer, so gaming performance isn’t terribly important to me.

Some decent games are available for the Mac, though, and I’ve checked out a few.

First, in Civilization VI, the Mac Studio managed 65 frames per second (fps) at 1080p and ultra graphics.

That’s well below Windows gaming PCs, but still playable, and oddly enough, the Mac Studio achieved almost exactly the same fps as I ramped up the resolution to 4K. I also ran Baldur’s Gate 3 and Fortnite, and both ran well at their default settings.

The M3 Max in the MacBook Pro will provide much stronger gaming, as will well-equipped Windows desktops.

I wouldn’t recommend buying the Mac Studio for gaming, especially at such high prices.

Superior display support

Looking at these benchmark results, you may be considering a MacBook Pro with the M3 Max rather than the Mac Studio.

That’s a reasonable proposition, but if you’re looking for a desktop configuration, you’ll want to carefully consider display support.

The Mac Studio with the M2 Max supports up to five displays.

That’s four displays with 6K resolution at 60Hz via Thunderbolt 4, and another 4K display at 60Hz over the HDMI port.

You can also attach two displays at 6K and 60Hz and one 8K display at 60Hz or a 4K display at up to 240Hz via HDMI.

The M2 Ultra version supports up to eight displays at 4K and 60Hz, up to six displays at 6K and 60Hz, and up to three displays at 8K and 60Hz.

The HDMI port also supports a 4K display up to 240Hz.

Of course, other combinations are also possible.

The MacBook Pro with the M3 Max supports up to four external displays, with three at 6K and 60Hz and one 4K resolution at up to 144Hz via HDMI.

Or, you can run three external displays, one at 6K at 60Hz, one at 8K at 60Hz, and one 4K at up to 240Hz via HDMI.

It’s complicated, but the bottom line is that the Mac Studio supports more displays at higher resolutions and refresh rates.

If you plan a complex multi-monitor setup, the Mac Studio is the better choice.

Of course, as a desktop, you won’t be plugging and unplugging all those displays, although you could simplify things by using a Thunderbolt 4 dock with the MacBook Pro.

Expensive, but worth it

The Mac Studio is a standout desktop PC.

It’s incredibly small given that it offers such excellent performance, and I’m still amazed at how quietly it runs even while working hard.

The quality can’t be beat, which is typical of Apple products, and it provides plenty of connectivity and display support.

It’s also expensive, especially if you select the faster M2 Ultra model.

You could buy a MacBook Pro 16 with similar specs and an M3 Max for a couple of hundred dollars more and enjoy equal or better performance, depending on your application.

But the Mac Studio will support more displays and provide more ports out of the box.

If you’re looking for a powerful desktop PC and are devoted to macOS, then the Mac Studio is an excellent choice.


The steady march towards AI job automation continues — even at companies pioneering the tech.

According to a new report by The Information, search giant Google is looking to reassign or let go of sales workers whose jobs were automated by the company’s new AI tools.

While it’s unclear how many humans will end up being affected, it’s a clear sign of the times.

Earlier this year, Google ushered in a “new era of AI-powered ads.”

As part of the initiative, Google is trying to leverage AI tech to “deliver new ad experiences,” including “automatically created assets” that scrape content from existing ads and landing pages.

Some of these ads created by the company’s Performance Max feature can even change in real-time based on click-through rates to maximize visibility, a task that’s labor-intensive for human workers.
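To make that mechanism concrete, here is a toy epsilon-greedy sketch of click-through-driven ad rotation. It is purely illustrative, the ad IDs and rates are made up, and it makes no claim about how Google’s Performance Max actually works.

```python
import random

class EpsilonGreedyAdRotator:
    """Toy CTR-driven ad selection: mostly show the best-performing
    creative, occasionally explore the others to keep estimates fresh."""

    def __init__(self, ad_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.impressions = {ad: 0 for ad in ad_ids}
        self.clicks = {ad: 0 for ad in ad_ids}

    def ctr(self, ad):
        shown = self.impressions[ad]
        return self.clicks[ad] / shown if shown else 0.0

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.impressions))  # explore
        return max(self.impressions, key=self.ctr)        # exploit the best CTR so far

    def record(self, ad, clicked):
        self.impressions[ad] += 1
        self.clicks[ad] += int(clicked)


# Simulate 1,000 impressions over three hypothetical creatives.
rotator = EpsilonGreedyAdRotator(["ad_a", "ad_b", "ad_c"])
true_ctr = {"ad_a": 0.02, "ad_b": 0.05, "ad_c": 0.01}  # made-up ground truth
for _ in range(1000):
    ad = rotator.choose()
    rotator.record(ad, clicked=random.random() < true_ctr[ad])

print({ad: round(rotator.ctr(ad), 3) for ad in true_ctr})
```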

According to The Information, a “growing number of advertisers have adopted PMax since,” which has eliminated the “need for some employees who specialized in selling ads for a particular Google service.”

Per the report, almost half of the company’s 30,000-employee ad division was once dedicated to this kind of work.

It’s a notable shift for Google’s business, as advertising makes up a huge chunk of the company’s revenue.

By replacing human workers, the company is presumably aiming to increase profit margins by cutting costs.

But at what cost?

We’ve already seen several industries being affected by AI-driven job automation.

Earlier this year, IBM CEO Arvind Krishna told Bloomberg that the company is slowing or suspending hiring for any jobs that could be done by an AI.

“I could easily see 30 percent of that getting replaced by AI and automation over a five-year period,” Krishna told the publication at the time, which means that in total, AI could replace up to 7,800 jobs.

German tabloid Bild, which is owned by media publisher Axel Springer, similarly announced that it would “unfortunately be parting ways with colleagues who have tasks that in the digital world are performed by AI and/or automated processes,” according to a leaked email obtained by German newspaper Frankfurter Allgemeine.

Certain low-level jobs in particular are on the chopping block amidst the rise of AI technologies.

“It was [a] no-brainer for me to replace the entire [customer service] team with a bot,” Suumit Shah, a 31-year-old CEO of an Indian e-commerce platform called Dukaan, told the Washington Post in October, “which is like 100 times smarter, who is instant, and who cost me like 100th of what I used to pay to the support team.”

In short, AI is already snatching up jobs left and right — and according to a study by the McKinsey Global Institute, that trend could accelerate faster than anybody expected.

Goldman Sachs found in its research report earlier this year that roughly 300 million jobs could soon be lost due to AI.

More on AI job automation: IBM Replacing 7,800 Human Jobs With AI

The International Criminal Police Organisation, better known as Interpol, has announced the arrest of 3,500 alleged cybercriminals and scammers, alongside the seizure of $300 million in cash and digital assets across 34 nations.

The arrests marked the conclusion of what Interpol calls Operation HAECHI IV, a six-month investigation funded by South Korea.

The operation tackled seven particular types of cyberscam that Interpol lists as: “voice phishing, romance scams, online sextortion, investment fraud, money laundering associated with illegal online gambling, business email compromise fraud, and e-commerce fraud.”

It says that 82,112 suspicious bank accounts were blocked, and it seized $199 million in hard currency and $101 million in virtual assets.

Interpol says the majority of cases (75%) involved investment fraud, business email compromise and e-commerce fraud.

“The seizure of USD 300 million represents a staggering sum and clearly illustrates the incentive behind today’s explosive growth of transnational organized crime,” said Stephen Kavanagh of Interpol.

“This represents the savings and hard-earned cash of victims.

This vast accumulation of unlawful wealth is a serious threat to global security and weakens the economic stability of nations worldwide.”

Kavanagh also gives a shout-out to agents in the Philippines and Korea, whose co-operation led to the arrest in Manila of a particularly high-profile online criminal sought under a Korean Red Notice, and the dismantling of the illegal gambling network he led.

“Despite criminals’ endeavors to gain illicit advantages through contemporary trends, they will eventually be apprehended and face due punishment,” said the head of Interpol’s Korea bureau Kim Dong Kwon.

“To accomplish this, Project HAECHI will consistently evolve and expand its scope.”

Interpol has issued two Purple Notices based on the investigation, which warn countries about the kind of fraud practices it discovered.

One of these is a new scam in Korea involving the sale of NFTs “with promises of huge returns, which turned out to be a ‘rug pull’, a growing scam in the crypto industry where developers abruptly abandon a project and investors lose their money.”

This was accompanied by a picture of cat-themed artwork, presumably the NFTs involved in the rug pull.

The second is about the use of AI and deep fake technology to “lend credibility to scams by enabling criminals to hide their identities and to pretend to be family members, friends or love interests.”

This practice was discovered many times in the UK during the operation, where “AI-generated synthetic content” was used to defraud, harass and extort victims “particularly through impersonation scams, online sexual blackmail, and investment fraud.”

Other cases involved the impersonation of people known to the victims “through voice cloning technology.”

Which is pretty freaky stuff: I’d like to think I’d be able to recognise my own family members’ voices on the phone, but these things feel like they get more sophisticated by the week.

Countries from Argentina to Vietnam took part in Operation HAECHI IV, including India, Japan, Pakistan, South Africa, Spain, the United Kingdom and the United States.

And while little else about this story is amusing, it did tickle me that Interpol held a debrief meeting in Singapore to analyse the operation’s results: and all those hard-working agents smiled and posed for the equivalent of an office Christmas photo.

In 1963, six years after the first satellite was launched, editors from the Encyclopaedia Britannica posed a question to five eminent thinkers of the day: “Has man’s conquest of space increased or diminished his stature?”

The respondents were philosopher Hannah Arendt, writer Aldous Huxley, theologian Paul Tillich, nuclear scientist Harrison Brown and historian Herbert J. Muller.

Sixty years later, as the rush to space accelerates, what can we learn from these 20th-century luminaries writing at the dawn of the space age?

Much has happened since.

Spacecraft have landed on planets, moons, comets and asteroids across the Solar System.

The two Voyager deep space probes, launched in 1977, are in interstellar space.

Related: How long could you survive in space without a spacesuit?

A handful of people are living in two Earth-orbiting space stations.

Humans are getting ready to return to the Moon after more than 50 years, this time to establish a permanent base and mine the deep ice lakes at the south pole.

There were only 57 satellites in Earth orbit in 1963.

Now there are around 10,000, with tens of thousands more planned.

Satellite services are part of everyday life.

Weather prediction, farming, transport, banking, disaster management, and much more, all rely on satellite data.

Despite these tremendous changes, Arendt, Huxley and Tillich, in particular, have some illuminating insights.

Huxley is famous for his 1932 dystopian science fiction novel Brave New World, and his experimental use of psychedelic drugs.

In his essay, he questioned who this “man” who had conquered space was, noting it was not humans as a species but Western urban-industrial society that had sent emissaries into space.

This has not changed.

The 1967 Outer Space Treaty says space is the province of all humanity, but in reality it’s dominated by a few wealthy nations and individuals.

Huxley said the notion of “stature” assumed humans had a special and different status to other living beings.

Given the immensity of space, talking of conquest was, in his opinion, “a trifle silly”.

Tillich was a theologian who fled Nazi Germany before the second world war.

In his essay he wrote about how seeing Earth from outside allowed us to “demythologise” our planet.

In contrast to the much-discussed “overview effect”, which inspires astronauts with a feeling of almost mystical awe, Tillich argued that the view from space made Earth a “large material body to be looked at and considered as totally calculable”.

When spacecraft began imaging the lunar surface in the 1960s, the process of calculation started for the Moon.

Now, its minerals are being evaluated as commodities for human use.

Like Tillich, Arendt left Germany under the shadow of Nazism in 1933.

She’s best remembered for her studies of totalitarian states and for coining the term “the banality of evil”.

Her essay explored the relationship between science and the human senses.

It’s a dense and complex piece; almost every time I read it, I come away with something different.

In the early 20th century, Einstein’s theory of special relativity and quantum mechanics showed us a reality far beyond the ability of our senses to comprehend.

Arendt said it was absurd to think such a cosmos could be “conquered”.

Instead, “we have come to our present capacity to ‘conquer space’ through our new ability to handle nature from a point in the universe outside the earth”.

The short human lifespan and the impossibility of moving faster than the speed of light mean humans are unlikely to travel beyond the Solar System.

There is a limit to our current expansion into space.

When that limit is reached, said Arendt, “the new world view that may conceivably grow out of it is likely to be once more geocentric and anthropomorphic, although not in the old sense of the earth being the center of the universe and of man being the highest being there is”.

Humans would turn back to Earth to make meaning of their existence, and cease to dream of the stars.

This new geocentrism may be exacerbated by an environmental problem already emerging from the rapid growth of satellite megaconstellations.

The light they reflect is obscuring the view of the night sky , cutting our senses off from the larger cosmos.

But what if it were technologically possible for humans to expand into the galaxy?

Arendt said assessing humanity from a position outside Earth would reduce the scale of human culture to the point at which humans would become like laboratory rats, studied as statistical patterns.

From far enough away, all human culture would appear as nothing more than a “large scale biological process”.

Arendt did not see this as an increase in stature: “The conquest of space and the science that made it possible have come perilously close to this point [of seeing human culture as a biological process]. If they ever should reach it in earnest, the stature of man would not simply be lowered by all standards we know of, but have been destroyed.”

Sixty years on, nations are competing to exploit lunar and asteroid mineral resources.

Private corporations and space billionaires are increasingly being touted as the way forward.

After the Moon, Mars is the next world in line for “conquest”.

The contemporary movement known as longtermism promotes living on other planets as insurance against existential risk, in a far future where humans (or some form of them) spread to fill the galaxies.

But the question remains.

Is space travel enhancing what we value about humanity?

Arendt and her fellow essayists were not convinced.

For me, the answer will depend on what values we choose to prioritise in this new era of interplanetary expansion.

How a race car was built entirely from recycled electronic waste

A Formula E team turned electronic waste into a racing car that can compete with the best in the world.

Have you ever wondered what happens to your old electronics when you throw them away?

Do you know that they can end up in landfills, polluting the environment and wasting valuable resources?

Well, one Formula E team has found a creative way to turn electronic waste (e-waste) into a racing car that can compete with the best in the world.

The Recover-E race car (Envision Racing)

What is E-waste?

E-waste is any discarded electrical or electronic device that is no longer useful or wanted.

It can include anything from disposable vapes and mobile phones to laptops, MP3 players, plugs and batteries.

According to the Global E-waste Monitor, the world generated 53.6 million tonnes (118 billion pounds) of e-waste in 2019, and this figure is expected to reach 75 million tonnes (165 billion pounds) by 2030.

E-waste is not only a huge waste of resources, but also a major source of pollution and greenhouse gas emissions.

Why is E-waste so dangerous?

E-waste contains toxic substances such as lead, mercury, and cadmium, which can leach into the soil and water, harming wildlife and human health.

Also, e-waste often ends up in developing countries, where workers dismantle it without proper protection or equipment, exposing themselves and their communities to hazardous materials.

Why is e-waste so hard to manage?

One of the main reasons why e-waste is so hard to manage is that many people are unaware of its impacts and how to dispose of it properly.

There is also a lack of effective policies and regulations to promote e-waste reduction, reuse, and recycling.

Furthermore, there is a shortage of adequate infrastructure and facilities to collect, transport, and process e-waste safely and efficiently.

How Envision Racing is driving change with an e-waste race car

Envision Racing is the team behind the world’s first Formula E car made entirely out of e-waste.

Envision Racing is not only a leading Formula E team, but also a champion of sustainability.

Its Race Against Climate Change program aims to inspire and empower fans and the public to take action on climate change.

The Recover-E race car being built (Envision Racing)

As part of this program, the team decided to create a car that would raise awareness of e-waste and its solutions.

They teamed up with Liam Hopkins, a British artist and designer who specializes in creating sculptures and installations from recycled materials.

They also collaborated with Music Magpie, a U.K. tech business that buys and sells used electronics, to source the e-waste for the car.

Liam Hopkins, left, and Aidan Gallagher stand next to the Recover-E race car.

(Envision Racing)

The e-waste car by the numbers

The result is the Recover-E car, a full-size, drivable Formula E Gen3 car made entirely out of e-waste.

The car weighs 2,645 pounds and can reach speeds of up to 174 mph.


It is composed of various electronic products, such as laptops, keyboards, mice, phones, vapes, batteries, and wires.

The car demonstrates how e-waste can be repurposed and reused in a creative and innovative way, rather than being thrown away and forgotten.

The Recover-E race car (Envision Racing)

How the e-waste car is a symbol of a larger campaign

The Recover-E car is not just a car, but a symbol of a larger campaign to tackle e-waste and climate change.

The Recover-E campaign aims to increase awareness of the human impact of e-waste and the need to reuse and recycle old electrical products.

It also seeks to educate and encourage people to take action on e-waste and climate change, by making pledges, donating or selling their e-waste, and supporting renewable energy and electric mobility.

Aidan Gallagher, a UNEP Goodwill Ambassador, drives the Recover-E race car. (Envision Racing)

The Recover-E campaign activities and events

The campaign has organized and participated in various activities and events to spread its message and reach a wider audience.

For example, it launched the Recover E-Waste to Race competition, inviting young people to create their own e-waste cars out of recycled electronic materials.

The winners were announced at the London ePrix, the final race of the 2022/23 Formula E season, where the Recover-E car was also unveiled.

Recover E-Waste race car (Envision Racing)

The car then traveled to COP28, the UN climate change conference, where it was displayed, attracting the attention of many onlookers.

The car will also head to Davos, Switzerland, in the New Year, to take the issue of e-waste to the global leaders and decision-makers.

A Recover E-Waste to Race competition (Envision Racing)

The campaign has achieved impressive results, such as generating 50,000 e-waste pledges and gaining recognition from influential figures and organizations.

Most importantly, the campaign has inspired and engaged young people and fans around the world to join the Race Against Climate Change and make a difference.

How you can reduce and recycle

This is a good opportunity for you to think about your own e-waste and how you can reduce it and recycle it.

Here are some tips on how you can do that:

Buy less and buy better.

Choose quality over quantity, and opt for durable and repairable products that last longer and have less environmental impact.

Sell or donate your old electronics.

Instead of throwing them away, you can sell them to a reputable company, or donate them to a charity or a school that can use them or refurbish them.

Recycle your e-waste properly.

Find out where and how you can recycle your e-waste in your area, and make sure you follow the instructions and guidelines.

You can also look for certified e-waste recyclers that ensure safe and responsible handling of e-waste.

Support renewable energy and electric mobility.

By switching to green energy and electric vehicles, you can reduce your carbon footprint and help fight climate change.

You can also support initiatives and policies that promote renewable energy and electric mobility.

Kurt’s key takeaways

Envision Racing has created the world’s first Formula E car made entirely out of e-waste.

This car is part of a larger campaign to raise awareness and action on e-waste and climate change.

E-waste is a global challenge that affects the environment and society.

Hats off to the Envision Racing team for creating the Formula E-waste race car.

What a creative way to showcase the potential of recycling and reuse.


How do you feel about the e-waste problem, and what are some ways that you try to reduce, reuse, or recycle your old electronics?


Give the gift of earbuds on sale at Amazon that will arrive before December 25.

Earbuds are a holiday gift-giving staple: one small package unlocks a new way for your loved ones to experience their favorite music and podcasts.

There’s still time to check these hot-ticket items off your list before December 25 (thank you, Prime), thanks to today’s sale on Beats earbuds at Amazon.

From Beats Studio Buds+ to Beats Fit Pro, Amazon is running a sale on some of Beats’ most popular earbuds today, Dec. 22.

The sale comes just in time for last-minute holiday shopping and features savings of up to $90.

Meanwhile, Beats Studio Buds are at their lowest-ever price of $79.95.

When Beats released the Beats Studio Buds+, they leveled up their own noise-canceling game.

These powerful earbuds offer 1.6 times more active noise-canceling than the original Beats Studio Buds and microphones that are three times the size, which helps better filter out background noise.

These earbuds also boast the most impressive battery life at 36 hours, making them the perfect pair of earbuds for travel.

If you need a gift for the fitness fanatic in your life, look no further than a pair of Beats Powerbeats Pro wireless earbuds.

They’re sweat- and water-resistant, making them the go-to choice for rigorous workouts.

Plus, secure-fit ear hooks help them stay in place even when you’re at your most active.

With surround-sound spatial audio and a 9-hour battery life, these earbuds will quickly become a part of your everyday routine.

At 47% off, Beats Studio Buds are available at their all-time low price of just $79.95 today at Amazon.

With an average price of $111.58 and a list price of $149.95, it’s a steal to score these earbuds for under $100.

While the Beats Studio Buds+ best this model when it comes to sophisticated specs, the original Beats Studio Buds are still quality noise-canceling earbuds that offer value — especially at this price.

If comfort is your number-one priority, look to Beats Fit Pro.

These earbuds share Beats’ signature true wireless noise-canceling technology and spatial audio capabilities while also touting a design that enhances overall fit and comfort.

The secure-fit wingtips on each earbud adapt to fit your ears, giving your earbuds more stability, while soft, customizable ear tips ensure your earbuds are just the right size for you.

Source: mashable
