By Sagar Meghani – Associated Press – Friday, December 22, 2023
As children around the world eagerly await Santa's arrival on Christmas, the military is ready to track him and see if he's using any new technology.
Armed with radars, sensors and aircraft, the North American Aerospace Defense Command in Colorado keeps a close watch on Santa and his sleigh from the moment he leaves the North Pole.
And it once again will share all those details so everyone can follow along as Santa travels the globe beginning Christmas Eve.
NORAD, the military command that is responsible for protecting North American airspace, has launched its noradsanta.org website, social media sites and mobile app, loaded with games, movies, books and music.
And there's a countdown clock showing when the official tracking of the sleigh will start.
The military will track Santa with "the same technology we use every single day to keep North America safe," said U.S. Air Force Col. Elizabeth Mathias, NORAD's chief spokesperson.
"We're able to follow the light from Rudolph's red nose."
Mathias says that while NORAD has a good intelligence assessment of his sleigh's capabilities, Santa does not file a flight plan and may have some high-tech secrets up his red sleeve this year to help guide his travels – maybe even artificial intelligence.
"I don't know yet if he's using AI," said Mathias.
"I'll be curious to see if our assessment of his flight this year shows us some advanced capabilities."
The Santa-tracking tradition began in 1955, when Air Force Col. Harry Shoup – the commander on duty at NORAD's predecessor, the Continental Air Defense Command – fielded a call from a child who dialed a misprinted telephone number in a newspaper department store ad, thinking she was calling Santa.
A fast-thinking Shoup quickly assured his caller he was Santa, and as more calls came in, he assigned a duty officer to keep answering.
And the tradition began.
NORAD expects some 1,100 volunteers, ranging from command staff to people from around the world, to help answer calls this year in a dedicated operations center at Peterson Space Force Base in Colorado Springs.
"It's a bit of a bucket list item for some folks," says Mathias, calling the operations center "definitely the most festive place to be on December 24th."
The operations center starts up at 4 a.m. MST on Christmas Eve and is open until midnight.
Anyone can call 1-877-HI-NORAD (1-877-446-6723) to talk directly to NORAD staff members who will provide updates on Santa's exact location.
The holiday season is one of the best times to shop for a new TV.
While you can generally find great TV deals throughout the year, the prices we’re seeing right now will likely not resurface until 2024’s first major retail holiday in February.
So if you’ve been eyeing a new TV as a gift or for yourself, now’s the time to shop.
When it comes to pricing, it’s hard to beat Best Buy.
The retailer currently has smart TVs on sale from $64.99.
The retailer also has 4K 50-inch+ TVs on sale from $199.
Below I’ve rounded up seven of the best TV deals you can get with expedited holiday shipping.
Note that delivery may vary based on your location.
However, many of these TVs can also be purchased online and picked up in-store.
Not sure which TV is right for you?
Make sure to check out our guide to the best TVs of 2023.
Alternatively, you can browse our guide to the five best OLED TV deals this holiday.
Best Buy has smart TVs on sale for as low as $64.
Keep in mind, the cheap TVs tend to be smaller, 1080p models (which are more suitable for a children’s room or guest room).
However, the sale also includes larger sets.
These are among the cheapest TVs we’ve seen from Best Buy.
By comparison, Amazon is offering a similar sale with prices from $64.
Price check: from $64 @ Amazon | from $88 @ Walmart
TCL 55″ S4 S-Class 4K TV: was $299 now $269 @ Best Buy
The S4 S-Class is one of TCL's new budget TVs.
Yet despite its budget friendly price, it packs Dolby Vision/HDR10/HLG support, DTS Virtual:X audio, built-in Chromecast, and Google TV Smart OS.
You also get three HDMI ports, including one with eARC support.
Price check: sold out @ Amazon
Roku 55″ Select Series 4K TV: was $349 now $299 @ Best Buy
The Roku Select is part of Roku's new line of TVs made in-house.
It features HDR 10 Plus/HLG support, Apple HomeKit/Alexa/Google Assistant compatibility, and four HDMI ports.
It also comes with Roku’s platform for all your streaming needs.
Sold exclusively at Best Buy, it’s on sale at its lowest price ever.
TCL 55″ Q6 4K QLED TV: was $499 now $348 @ Amazon
The new TCL Q6 4K QLED TV is a budget TV with plenty of great features.
It offers Dolby Vision/HDR10+/HDR10/HLG support, DTS Virtual:X audio, built-in Chromecast, and Amazon Alexa/Google Assistant compatibility.
Although the display is just 60Hz natively, Game Accelerator 120 allows for 120Hz VRR at a lower resolution.
You also get Dolby Atmos and eARC support.
Price check: $348 @ Walmart | $349 @ Best Buy
Hisense 65″ U6 Mini-LED ULED 4K TV: was $799 now $549 @ Amazon
Hisense's proprietary ULED technology is a step up from normal LED-based LCD TVs and offers enhanced color and overall better picture quality.
This Mini-LED QLED TV also features Dolby Vision/HDR10+/HDR10/HLG support, and built-in Google Assistant.
This is the cheapest price we’ve ever seen for this particular model.
Price check: $549 @ Best Buy | $548 @ Walmart
Sony Bravia 55″ X80K 4K TV: was $699 now $578 @ Amazon
The Sony Bravia X80K TV was already an excellent entry-level model, and at this discounted price, it's now an even better value for money.
It packs solid picture quality with low input lag and an excellent Google TV interface.
It’s not an audio powerhouse, but it’s an excellent pick if you’re looking for a large TV at a relatively modest price.
In fact, we named it one of the best budget TVs you can buy right now.
Price check: $579 @ Best Buy
Samsung 65″ Crystal 4K TV: was $627 now $579 @ Amazon
The CU8000 is one of Samsung's budget 4K TVs.
However, despite its price it packs a lot of modern features, including HDR10+/HLG support and Alexa/Google Assistant/SmartThings compatibility.
It also uses Samsung‘s own Tizen operating system.
Price check: $579 @ Best Buy
We use our smartphones for everything these days.
From texting to photography, to web browsing and online gaming, there's really nothing an iPhone or Android can't handle.
That is, until your phone's internal storage is depleted.
When we refer to smartphone storage, we're talking about the onboard gigabytes a phone uses to run applications, store photos, videos, music, and documents, install firmware and OS upgrades, and more.
If you've ever purchased a new phone, you've probably seen numbers like 32GB, 64GB, and 128GB featured alongside the model of the device you're looking at.
These figures represent the amount of data storage that the phone's manufacturer has included on the phone itself.
So when it comes time to buy a new iPhone or Galaxy smartphone, exactly how much storage space does the average person need?
And are there ways to add additional storage when and if you run out?
We've put together this helpful explainer to answer these questions and more.
Given the chart-topping amounts of RAM and internal storage that modern smartphones are known for, it may surprise you to learn that some of the earliest web-connected phones contained a single gigabyte of storage or less.
But as technology has evolved over the years, companies like Apple and Samsung have had to increase the amount of RAM and storage space available on smartphones.
Believe it or not, smartphones actually reserve a portion of the internal storage that's advertised to users.
This is because it takes more than just a solid processor and plenty of RAM to run mobile operating systems like iOS and Android.
These pre-installed platforms need a place to store and access crucial data for running your phone's OS and whatever apps and software your device is running.
If you've ever seen a breakdown of your iPhone or Samsung Galaxy device's internal storage, you've probably seen that a decent chunk of storage (usually somewhere between 5GB and 10GB) is allocated to the phone's OS.
And no matter how good you are at freeing up storage space on your phone (we'll cover this more in a bit), you'll never be able to reclaim the internal data your phone takes for itself.
Now that we've cemented the hard truth that iOS and Android are always going to use some of our phone's advertised bytes, the data that's left is taken up by whatever we, the users, decide to use our smartphones for.
For the average user, one major source of storage space usage is how many apps you have downloaded on your device, and what types of apps these are.
Considering there's an app for just about everything these days, these mobile-friendly bundles of software are meant to support every kind of want or need.
If you're a music fanatic, you may have a number of music-streaming platforms and music publication apps installed on your phone.
If you prefer using your smartphone to balance your budget, you'll probably have a number of banking and credit card apps downloaded.
Then there are all the social media apps and messaging platforms that we use to stay in touch with friends, family, and colleagues.
Some apps take up very little storage space, while others consume a whole bunch of your bytes, especially when you start getting into mobile games (which are also considered apps), editing workflow tools (think Adobe Creative Suite), and weather monitoring.
Yes, you'd be surprised how much data the Weather Channel app can use.
Another huge culprit when it comes to storage space harvesting is general media.
Photos, videos, and music are nice to look at and listen to, but by golly can they take up a ton of space on our smartphones!
To put things into perspective, Apple claims that a one-hour 1080p/30fps video, encoded as an H.264 file, takes up about 7.6GB on the average iPhone.
If you boost things up to 4K/30fps, you can expect to lose around 21GB.
As for Android smartphones like the Samsung Galaxy S23 , Galaxy S23+ , or Galaxy S23 Ultra , Samsung claims that recording in 1080p (frame rate unspecified) for up to one minute uses around 100MB of data, and up to 300MB for 4K. That translates to about 6GB of data per hour in HD, and 18GB for 4K footage.
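For a rough sense of how those per-minute figures translate into hours of footage, here is a quick back-of-the-envelope calculation in code. It is a minimal sketch using the approximate numbers quoted above, plus an assumed ~10GB operating system allowance; these are illustrative estimates, not official capacity figures from Apple or Samsung.

```java
// Rough video-storage math based on the per-minute figures quoted above.
public class VideoStorageEstimate {
    public static void main(String[] args) {
        double hdMbPerMinute = 100;   // ~100MB per minute at 1080p (figure quoted above)
        double uhdMbPerMinute = 300;  // ~300MB per minute at 4K

        double hdGbPerHour = hdMbPerMinute * 60 / 1000;   // ~6GB per hour
        double uhdGbPerHour = uhdMbPerMinute * 60 / 1000; // ~18GB per hour
        System.out.printf("1080p: ~%.0fGB per hour%n", hdGbPerHour);
        System.out.printf("4K:    ~%.0fGB per hour%n", uhdGbPerHour);

        // How many hours of 4K footage fit on a 256GB phone, assuming ~10GB goes to the OS?
        double usableGb = 256 - 10;
        System.out.printf("Hours of 4K on a 256GB phone: ~%.1f%n", usableGb / uhdGbPerHour);
    }
}
```

Swap in your own phone's capacity and shooting habits to gauge how much headroom you actually need.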
Fortunately, companies like Apple and Samsung know how much data your photos and videos take up.
Apple's solution was the introduction of a new encoding standard called High Efficiency Video Coding (HEVC) with iOS 11, which produces files roughly half the size of an equivalent H.264 recording.
And as for Samsung, many of the brand's Galaxy smartphones allow for expandable storage via microSD (more on this in a minute).
If you're in the market for a new iPhone or Samsung Galaxy smartphone, you may be wondering how much storage space you should shop for.
That's all going to depend on how you plan on using your smartphone.
To make things simpler, let's look at a few different smartphone user types to better understand how much storage you should be seeking.
You're the type of person who doesn't let your phone run your life (good for you!).
Of course, it's always a good idea to have your device handy in case of emergencies.
You also enjoy using your phone to search the web at the doctor's office, and you like snapping a few photos at the occasional family gathering.
And other than using your smartphone to set an alarm or two, that's pretty much the extent of your device usage.
If this sounds like you, you're probably not going to need very much storage space at all.
It's always a good idea to go with the newest tech though, so if you're Team iPhone, you'd probably want to check out the 128GB version of the iPhone 15.
For Android fans, the 128GB version of the Samsung Galaxy S23 should be more than sufficient.
We expect most mid-range phones to offer 128GB as the base storage option, but in this day and age, 128GB of starting storage is hard to accept on flagship-caliber phones.
You like being on your phone, and you like using it for everything.
You've always got a decent handful of apps downloaded to your smartphone, and a couple of them are usually high-performance games.
You also enjoy posting to social media on a near-daily basis, and you'll snap a photo or video of just about anything cool you see out in the world.
We'll call this the middle ground between "a little" and "a lot" of needed storage space.
This type of user should look at the 256GB version of the iPhone 15, or maybe even the iPhone 15 Plus.
And over in Android Land, this user type should also invest in 256GB for the Samsung Galaxy S23, or the Samsung Galaxy S23+.
You're on your phone so much that you could be a phone (imagine that).
You typically download every single app you're even mildly interested in, and you're all about cutting-edge mobile gaming.
When it comes to social media, you're posting every hour, to the point where your family wonders how you manage to have a life outside your screens.
You also like to take photos and videos whenever you get the chance, and you always export and save in the highest resolution possible.
This type of user needs as much storage space as possible, and a top-shelf phone to go along with the big bytes.
If you're thinking of going with an iPhone, we'd urge you to consider the 512GB version of the iPhone 15 Plus.
For Android users, we'd recommend going with the 512GB version of the Samsung Galaxy S23+, or even the 1TB version of the Galaxy S23 Ultra.
Even if you've made the most data-conscious buying decision you can for your smartphone, you may still run into situations where your storage space is nearing capacity.
Not to worry, though: there are a number of things you can do to increase and preserve storage space on your mobile device.
First and foremost, we recommend looking into external storage options.
Traditionally, iPhones have never had built-in microSD card slots, but that's not the case for many Android-powered smartphones.
If you're running out of bytes on your go-to Android, you can always toss in a microSD card (assuming your phone supports one), which should give you another 200GB or so.
You can also invest in a smartphone-friendly flash drive if you want to offload some of your existing media to free up space.
Cloud storage platforms are another excellent way to free up storage space.
While a majority of these platforms require some kind of monthly subscription, the most basic tiers of storage are generally inexpensive.
For instance, you can add up to 50GB of iCloud storage to your iPhone for as little as $1 per month.
You can also offload media and other files using tools like Google Drive and Dropbox.
Clearing app data and old downloads is another great way to free up storage.
This is usually as simple as heading into your phone's settings and looking for the storage menu.
From here, you should be able to see how much data is being split between apps, media, and your phone's OS.
If you're using an iOS device, your iPhone will even give you the option of deleting old downloads, texts, and other content to reclaim some internal storage.
If you're not overly concerned with every single photo and video looking crystal-clear, you can always choose to export media in lower-res formats.
And as for music, instead of downloading tracks and albums to your device, you can always opt for music-streaming services instead (although this will require an internet connection or cellular data).
However you choose to free up space on your iPhone or Samsung Galaxy device, we can't emphasize the following enough: make sure you have enough available storage for software updates.
Every web-connected smartphone receives routine software patches from its manufacturer.
This ensures that your phone is running as efficiently as possible, while clearing up any bugs or glitches that may have impacted previous versions of the OS.
Some software updates may even allow your smartphone to use less internal storage to run apps and store media.
Smartphones are constantly evolving, which means that internal storage on newer models should continue to increase.
We wouldn't be surprised to see 512GB as the base offering for a new iPhone or Samsung Galaxy device five years down the line, especially when you factor in new resolutions, frame rates, and OS processes.
And even if you don't think you'll need a ton of data for your apps and media, it's never a bad idea to invest in extra storage, especially if the difference between 128GB and 256GB is only another hundred dollars or so.
While many new smartphones are being offered with 128GB as the base storage option, some of the most hands-off users may be able to get away with 64GB or less.
It may be hard to find these minimal byte quantities on newer phones from the likes of Apple and Samsung though.
Deleting unused apps and media, offloading content to external devices and cloud platforms, and choosing lower export settings are a few ways you can free up storage space on your smartphone.
It's also a good idea to occasionally check your phone's storage settings, in order to get a better sense of what apps and content are using the most data.
Yes.
Your smartphone's operating system and apps use a certain amount of non-RAM storage to download, install, run, and update.
In the event that you start to run out of storage space, certain app features may stop working.
If all your internal storage is depleted, you also won't be able to download software updates for your phone's OS.
As many security patches arrive through software updates, not having enough free storage to download these updates could be damaging to your device and user data down the line.
Here's a simple truth: where there's a Dyson product, there's a dupe, and none are more popular than the best Dyson Supersonic dupes.
When the brand first released the Supersonic hair dryer in 2016, it reinvented the blow dryer.
Seven years later, it remains the best hair dryer in the world, and it's clear that Dyson's beauty tech bet was more than a success.
While it's an impressive product, you don't need to pay upwards of $400 to get the at-home hair-drying experience you desire.
The Supersonic first made waves because, in typical Dyson fashion, it gave an upgraded, futuristic design to an everyday product.
Dyson is known for this aesthetic, but the sleek look also has a function.
The ring-shaped head ditches the vented and coiled model of the traditional hair dryer and houses the tiny V9 motor in the handle.
The combination of the V9 motor and the Air Multiplier technology makes for a dryer that clocks in at only 1.8 pounds, yet still delivers a powerful airflow thatâs notably quieter than the roar of a traditional dryer.
To minimize damage, the Supersonic also measures the air temp up to 20 times per second and uses a built-in ionizer to minimize static and give the hair a sleek finish, which brings us to a quick ionizer science lesson.
Ionizers are pretty common in higher-end hair dryers.
Why?
Most work by blowing negative ions at wet hair to reduce static electricity by sealing the hair cuticle and taking down the power of that positive ionic charge (aka what’s causing that annoying frizz).
As negative ions make contact with hair, they’re also dispersing the positive ions of water, therefore cutting down on your drying time and reducing damage in the process.
Basically, it's one of the reasons the Dyson Supersonic provides such quick and excellent results, and why hair dryers with ionizers will cost you more money – they do more than simply dry the hair.
Magnetic attachments designed to easily snap onto the blow dryer round out the futuristic feel of the Supersonic, with five included – a styling concentrator, a flyaway attachment, a diffuser, a gentle air attachment, and a wide-tooth comb.
It’s a nice array of included nozzles even for high-end dryers, which might typically include three to four attachments at the most.
At $429, the overall package of the Supersonic is definitely an investment.
However, you’re paying for a high-end motor that’s built to last, multiple heat settings to protect hair, an innovative design, and of course, the ionic tech.
Other dryers from popular hot tool brands like T3, ghd, and Harry Josh boasting some similar features will run you anywhere from $150 to $350, but none quite capture the complete offerings of the Supersonic.
When I tested the Supersonic myself, I found that it had a luxe feel that still makes it stand out from other hair dryers.
Dyson also released an “affordable” version of the Supersonic, called the Supersonic Origin, earlier this year that retails for $399.99.
At only about $30 cheaper, I think the price-to-feature ratio is actually a much worse value than just going for the regular Supersonic, unless you can grab the Origin on sale.
At the same time, there are dupes out there that deliver similar features and elements of the performance at a much lower price.
There are a lot of options for luxury blow dryers out there and a lot of dupes that attempt to look like the Dyson but skimp out on quality.
While it's not entirely feasible to find an exact one-to-one alternative for a fraction of the price, it is possible to find Supersonic dupes you're more than satisfied with.
The trick is to identify what exactly draws you to the Supersonic in the first place.
For a deeper dive on how each of these blow dryers performed and info on where to buy them, read on for the best Dyson Supersonic alternatives – all tested by the Mashable team.
TL;DR: Through Dec. 25, get Pixilio The Ultimate AI Image Generator for just $19.97 – you'll save 94%.
Everyone has that hard-to-buy-for loved one on their gifting list.
With time running out, you may be looking for a last-minute present that won’t require shipping but will still wow under the Christmas tree.
And if you happen to know someone who loves technology and/or content creation, or is simply intrigued by artificial intelligence, we have a great digital option that can be sent instantly to your inbox.
With Pixilio, you or your loved one can create images with the power of AI in seconds.
The possibilities are endless, as it can be used for work purposes or just for fun, and you can take advantage of holiday savings and gift them a lifetime subscription for only $19.97.
That’s $340 off the usual price as long as you purchase before December 25.
With this lifetime subscription to Pixilio The Ultimate AI Image Generator, your loved ones can tap into their creative side and bring anything in their imagination to life.
All they have to do is input the ideas they have, sit back, and watch what Pixilio whips up in just a few seconds.
Everything created will be considered original content that is 100% owned by them, so they can use it anywhere from marketing campaigns to social media to websites to art to hang in their home.
This lifetime subscription to Pixilio means the possibilities are endless, as they’ll have a lifetime to create stunning images generated by artificial intelligence with their exact parameters.
There’s no prior graphic design experience needed, all they have to bring to this tool is their unique ideas and concepts.
Give a unique gift with this lifetime subscription to Pixilio AI Image Generator, now just $19.97 (reg. $360) until December 25 at 11:59 p.m. PT.
StackSocial prices subject to change.
Apple, as you may know by now, is at the heart of a health technology patent dispute with Masimo, a medical technology company.
It centers on the blood oxygen monitoring capability found in the Apple Watch Series 9 and Apple Watch Ultra 2 smartwatches.
Along with these being removed from sale, more details have now emerged – and they're not good news.
Apple Watch Ultra 2, left, and Apple Watch Series 9.
As promised, Apple removed Apple Watch Series 9 and Apple Watch Ultra 2 from sale on its website.
In-store sales continue until the end of business on Sunday, December 24.
And you can also buy these models from other retailers, such as Amazon, Best Buy and Target, while stocks last.
The Apple Watch SE does not have blood oxygen measuring capabilities, so it is still on sale.
Apple Watch Nike and Apple Watch Hermes are marked as currently unavailable on apple.com.
The ban is limited to the U.S., so sales outside the States continue as normal.
But the new details are subtle and, frankly, annoying.
According to Bloomberg's Mark Gurman, "Apple's customer service teams were informed in a memo that the company will no longer replace out-of-warranty models going back to Apple Watch Series 6.
That means if a customer has a broken screen, for instance, they won't be able to get the issue fixed by Apple.
The company will still offer help that can be done via software, such as reinstalling the operating system."
This is a nasty sting in the tail.
Since some hardware issues routinely led to replacements rather than repairs, affected customers will now be told that "they will be contacted when hardware replacements are allowed again," Gurman says.
Until now, it had seemed that anyone who owned a Series 6, Series 7, Series 8, or Ultra would be blithely unaffected by the ban.
Now, only those whose Watches are still under warranty can rest easy.
That means Series 8 and Ultra purchases made within the last 12 months, plus all Series 9 and Ultra 2 models, as these first went on sale on September 22 this year.
Watches still under warranty "aren't affected by the replacement prohibition," according to Bloomberg.
But there are still concerns, even if you're buying the Watch in-store today, for instance.
Supposing you're gifting the Watch to someone and they would rather have a different color.
Bad luck, it seems.
Gurman explains, "After Dec. 25, Apple also won't be able to exchange a watch purchased before the ban, say for a different color or size, during the typical return period.
Retail staff was told a product swap won't be allowed, but Apple will replace accessories like bands.
Watches can still be returned for a refund."
Apple Watch Series 9 and Apple Watch Ultra 2 are stunningly good products, way ahead of the smartwatch competition.
But right now, these restrictions make them tricky to recommend, at least in the U.S.
Like a circus strongman’s barbells, 2023’s year in VR has been heavily weighted toward either end.
The last twelve months were bookended with the launch of two excellent VR headsets (although only one is truly relevant to us PC gamers) and a whole bunch of great VR titles.
The summer, however, was the quietest I’ve known for a long time, to the point where I ended up playing two VR minigolf games in the absence of anything better to do.
Fortunately, the last few months have made up for it, providing enough fantastic VR games to feed us well into next year.
So it’s an unevenly weighted set of barbells, the kind that would give our moustachioed muscleman a slipped disk.
It’s also been a year of interesting shifts.
What was proclaimed the inevitable future of VR eighteen months ago – the Metaverse – is now dead in the water.
Or at least, it’s crawling weakly toward the ocean while our hero follows slightly behind, almost pitying the poor wretch, but still intent on drowning the malignant charlatan in the shallows.
Meanwhile, the headsets that have dominated this year point to the growing platform-based nature of VR.
Where a few years ago you only needed a headset and a PC to access the full suite of VR games, now you need the right headsets (and a PC).
It’s a trend that could have significant ramifications for PCVR in the future, although we’re not there quite yet.
This was handily demonstrated at 2023’s starting line by the launch of the PSVR2, Sony’s follow-up to its previous headset that’s designed to take advantage of the power of the PS5.
Whereas the original PSVR was a bit too weedy to draw PCVR gamers away from the Rifts and the Vives (which sounds like a 1950s rock ‘n’ roll band) the PSVR2 is a cutting-edge headset, and brought some tasty exclusives to boot.
Most notable were Horizon: Call of the Mountain, and an exclusive VR version of Resident Evil: Village.
Although, the one that really sticks in the craw is the recently released VR version of Resident Evil 4 Remake.
There are now two VR versions of Resident Evil 4, and you can’t play either of them on PC.
Rotters.
Meanwhile, Meta has spent the last few years buying up VR developers and signing either outright or timed exclusives for its Quest headsets, meaning PCVR gamers have had to wait for some of its biggest hitters to land.
And even then, some of them landed belly first, such as The Walking Dead: Saints And Sinners Chapter 2, a disappointing sequel to one of the best VR games around.
There were some fun highlights from smaller developers in the spring, like the VR rogue-like The Light Brigade, and the charming perspective-based adventure Another Fisherman's Tale, but by late summer PCVR's cupboard was starting to look pretty bare.
It was rumoured that the reason for Meta’s purchasing spree was so that those developers could make content for its Metaverse platform Horizon Worlds.
The Metaverse dominated headlines last year and was destined to be the future of VR, if you believed Mark Zuckerberg’s attempt to mimic the human emotion “enthusiasm”.
But the prospects for Horizon seemed flimsy from the start, and while Horizon Worlds is still theoretically in development, with Meta still pouring billions of dollars into it, the Metaverse has had its lunch thoroughly eaten by this year’s technological fad, AI.
As noted by the Verge, at this year's Meta Connect, Horizon Worlds received significantly less attention than Meta's AI initiatives, like a virtual assistant that can be integrated into chats across the company's messaging platforms.
Oh well, at least everyone in Horizon Worlds has legs now.
Yet while Meta’s vision for the Metaverse remains more watery than American tea, the company’s still right at the forefront of VR headset design.
In October, Meta launched the Meta Quest 3, which offers a substantial hardware upgrade from the Quest, alongside improved passthrough and mixed reality capabilities.
The higher price and the lack of decent launch titles put me on the fence about whether it was worth the upgrade, but now that it has a slew of games like Assassin's Creed Nexus, Samba Di Amigo: Party Central, Lego Bricktales, and Asgard's Wrath 2, it's a much more viable proposition.
Indeed, VR gaming has saved all the good stuff for the last three months.
Alongside the Quest 3’s slightly-after-launch titles, we’ve seen PCVR releases for Arizona Sunshine 2, Five Nights at Freddy’s: Help Wanted 2 (which, like it or not, is a massive VR game), and the wonderful remake of ye-olde spooky puzzler the 7th Guest.
If you own a Quest 2 or 3, you’ll have access to nearly all of this because of Quest Link, making it one hell of a Christmas list for VR fans.
In short, 2023 has gone from being a pretty dry affair in VR land, to a bit of a bumper year right at the death.
But the quality of your harvest depends heavily on which headset you’ve got, and the big question going forward is how much further the existing VR platforms are likely to drift apart.
Right now, if you own a Quest headset, then you can access both the Quest exclusive library, and anything PCVR via Quest Link and Air Link.
But will that remain the case?
Meta has been pushing to make the Quest its own platform for years now.
At what point does the Quest sever its connection with the PC entirely?
I think this may well happen eventually, but it’s unlikely in the short term.
Alongside its baseline audience, the Quest also has a huge user base on Steam.
In fact, according to Steam's own hardware survey, it's by far the most popular PCVR headset, accounting for 40% of VR users between June and November 2023 (the Valve Index, by comparison, is the second most popular at 19%).
Letting the Quest access Steam gives the headset substantially more functionality, and there’s no logical reason why Meta would want to stop that provided it can still sell exclusives on its own store.
And while Valve has its own headset it wants to sell, it also seems to understand the importance of Meta’s continuing presence in the PCVR scene.
Earlier this month, Valve partnered with Meta to update its Steam Link application to stream to Quest 2 and 3, a sort of reverse system to Quest Link and Air Link.
As well as providing further PCVR functionality for Quest, it points to a continuing overlap between Meta headsets and PCVR.
As for what else the future holds, the short answer is: more headsets and more games.
It’s likely we’ll see Apple‘s Vision Pro next year, possibly as early as January.
Apple‘s headset is unlikely to have much relevance in these here parts, but it’ll be interesting to see the response to it nonetheless.
More significant is Valve’s mysterious “Deckard” project, purportedly some kind of follow-up to the Valve Index.
There are no firm details about Deckard, although some cryptic words uttered to James by Valve designer Lawrence Yang hinted it might take inspiration from Valve's work on the Steam Deck.
“Just like Steam Deck is learning a bunch of stuff from controllers and VR, future products will continue to learn from everything we’ve done with Steam Deck.”
My money’s on a Valve equivalent of the Quest, a standalone, inside-out tracking headset that can connect wirelessly to your PC (which, incidentally, Steam Link would be ideal for).
But let’s close out 2023 with a quick glance at what we’ll be playing next year.
Titles on the cards for 2024 include Bulletstorm VR, the cyberpunk detective sim Low-Fi, and the giant-smashing adventure Behemoth, from the creators of Saints And Sinners.
But by far the most exciting VR prospect for next year is Taskmaster VR.
Getting chewed out by a virtual Greg Davies for playing VR Jenga badly?
That alone is worth spending £500+ on some fancy goggles for your face.
Climate change and labor shortages worldwide threaten crop yields and food security, and ESG expectations add an extra layer of complexity for food producers – even in countries with abundant natural resources.
Brazil offers a valuable example of this.
Its land and climate, combined with some determined long-term vision and ambition, have made it a global leader in a host of food and agricultural commodities.
However, even with all its natural advantages, Brazil is facing serious challenges.
Along with climate change and labor shortages, it's also encountering changing consumer demands and demand volatility.
A particular challenge for Brazil is the deforestation of the Amazon, which attracts negative global attention.
Together, these factors pose a threat to Brazilâs sustained growth.
The Agri-Technologies Of Brazil
Eager to continue their market expansion yet facing the same ESG and environmental challenges as other markets, Brazilian agribusinesses are increasingly turning to digital and genomic innovations to tackle these crucial challenges while heeding national and international calls to improve the sustainability of their operations.
Brazil now boasts a host of startups that are developing innovative blockchain, artificial intelligence and drone technology, looking for ways to improve competitiveness.
The big difference is that whereas in the past, the goal was simply to enhance productivity, the current generation of innovators offers ESG and sustainability features to the mix.
Here are examples of some of the technologies these innovators are using to get there.
Blockchain startups can improve traceability, efficiency and sustainability in the food supply chain.
They also provide the information customers, regulatory authorities and other actors in the food chain need to increase the transparency of ESG initiatives in Brazilâs food and agriculture supply chains.
Artificial intelligence (AI) is also being put to work on the farm, allowing for better decision-making amid the climate and labor challenges.
This technology is helping Brazil advance sustainability practices in agriculture by optimizing water, fertilizer and crop protection, reducing waste and minimizing environmental impacts.
Even in a country that traditionally has benefited from cheap labor, recent worker shortages in the food chain increase the value of AI automation on farms and in food factories.
Drones are increasingly common on Brazilian farms.
The ability of drones in Brazil to collect information remotely is particularly critical in a country where the road infrastructure is challenging and where auditing sustainability metrics is increasingly crucial to the international customers of Brazil's produce.
Lessons For Global Agtechs
Food producers and farmers of the world should take note.
Brazil is home to some of the most productive land in the world, and its agricultural advancements aren't just a boon for Brazil – they also contribute to global food security.
Traditionally, Brazil has focused almost exclusively on productivity for productivity's sake, keeping the price down and the volume up.
However, today's global buyers want more than cheap volume – they also want assurance that the food is safe and has been sustainably produced.
Ultimately, Brazil serves as a model for other countries – particularly in Latin America and Africa – in how to develop sustainable, efficient and technologically advanced agriculture.
More importantly, it shows that agri-technology innovators worldwide can succeed by addressing more than productivity and performance: Developing tech that helps make the planet a better place to live can be a net positive, not just another challenge.
The aviation industry, a cornerstone of the global economy, facilitates connections between people and businesses across continents.
Yet, it grapples with a critical issue: a shortage of skilled workers across a wide range of roles.
One important way to address this challenge is for the aviation industry to recognize the untapped potential of skilled immigrants who can contribute invaluable knowledge and experience to the field.
However, the journey for immigrant workers in the aviation sector is fraught with challenges, some of which I have personally encountered.
In this article, I’ll explore the obstacles they confront and how their integration can yield mutual benefits for the industry and the broader economy.
Challenges Faced By The Immigrant Workforce
Skilled immigrants have a diverse range of qualifications and expertise, often aligning with the demands of the labor market.
Unfortunately, many times these workers land in jobs that are not relevant to their expertise.
This misalignment not only impacts immigrants’ financial stability but also stifles the broader economy as valuable skills remain underutilized.
Drawing from my personal experience as an immigrant within the aviation industry, cultural and social assimilation issues loom large.
Earlier in my career, I was also rejected from positions in the aviation industry multiple times that were relevant to my experience due to my nationality.
One major concern for the aviation industry, in particular, is that qualified immigrant workers may face challenges when transferring the certifications and experience that demonstrate their qualifications.
Pilots have also faced , according to .
How Companies Can Help Immigrants Navigate The Industry
Navigating societal and workforce-entry challenges demands a multifaceted approach.
To overcome these barriers, the aviation industry must provide essential resources.
Tailored language programs can empower immigrants to overcome language barriers and enhance their professional opportunities.
Offering cultural sensitivity training ensures a smoother integration process, fostering a workplace environment that values diversity.
Industry-specific certifications can validate immigrants’ qualifications, ensuring their skills are acknowledged and utilized appropriately.
Moreover, addressing the aviation industry’s labor shortage requires targeted job training, networking opportunities and technology adoption.
For instance, job training initiatives should be customized to meet the unique needs of immigrant workers.
Technology also can play a large role here, especially large language models and other tools that can help bridge language barriers for non-native speakers.
The Role Of Government And Industry
Realizing the full potential of skilled immigrants in the aviation industry requires concerted efforts from government and industry stakeholders.
For example, policies that facilitate the recognition of foreign qualifications are essential.
Industry leaders, for their part, must proactively cultivate an inclusive work environment that appreciates diversity and leverages the unique skills immigrants bring to the table.
Beyond supporting these workers, workforce diversity is closely linked to stronger business performance, making it a crucial aspect of a thriving aviation industry.
Conclusion
Embracing skilled immigrants in the aviation industry is not merely a solution to the labor shortage; it's a pathway to substantial economic benefits.
A growing and productive labor force will enable the industry to meet the rising demand for air travel and related services and can foster increased economic growth and job creation.
Data is everything in modern healthcare.
Those who know how to operate it efficiently and securely – especially when it "travels" – have the best chance for success.
Introduced in 2012, Fast Healthcare Interoperability Resources (FHIR) has become an omnipresent solution that facilitates data interoperability and security compliance.
Yet, there is a lot to understand about FHIR and how to introduce it into a company's operational ecosystem.
Establishing an FHIR facade might be the most appropriate starting point.
However, there are also copious questions regarding its benefits and operational technicalities.
In this article, I will explore whether an FHIR facade is the solution you need when considering migrating to FHIR.
Do Not Confuse FHIR Facade With FHIR Repository
Healthcare applications send and receive data in various formats; the underlying models differ to a certain degree.
Yet, standardizing the incoming and outgoing data formats is the linchpin of modern healthcare industry interoperability, and FHIR is the standard that ensures standardized electronic healthcare information exchange.
However, many healthcare organizations need help switching to an FHIR-compliant data exchange system because they find themselves confused – forced to choose between an FHIR facade and an FHIR repository, thinking that one option is somehow better.
Because of this confusion, let's look at the difference between an FHIR facade and an FHIR repository, expanding on the most appropriate scenarios for choosing each option.
FHIR Facade Vs. FHIR Repository
The core difference between the two approaches is where they store the data, as described by Smile Digital Health.
When opting for a facade (HAPI FHIR Plain Server or Smile Digital Health's Hybrid Provider), you can work with data that remains in its source system.
Data queries are mapped to FHIR on the spot and delivered in real time from the data sources.
With an FHIR facade: you can use an existing database, and no major technology revamp is necessary; the simplicity of having a single source of truth for data remains; and your existing data-mapping scheme stays in place while you gain the ability to preview the FHIR-based data output.
In the case of the repository model (HAPI FHIR's JPA Server or Smile's CDR), a separate asset for data copy storage is required – a so-called native FHIR clinical data store, or repository.
This strategy allows the data to be converted to the FHIR standard before any queries, which, in turn, allows for a connection to the newly built repository instead of mapping the backend data sources.
Here is the repository model in a nutshell.
It helps minimize data exposure by limiting what data is held in the repository.
Smile's CDR can help you maintain the extensive data set and keep it compliant with any upcoming FHIR compliance requirements.
However, it will take a lot of time to copy all of the organization's data into the repository.
FHIR Facade Implementation Options
So, what are the best options for implementing an FHIR facade into your organization's operations?
Here are the two options to consider.
You will get a fully functional FHIR facade, but this approach requires an in-depth preliminary audit of your existing system to map your internal data model into the FHIR facade successfully.
Alternatively, using a generic FHIR server for storing and managing the data that will run through your API is a good idea.
You will still need to map your data, but as soon as you synchronize the data on your internal server with the FHIR server, that should suffice.
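To make the facade pattern concrete, here is a minimal sketch of what a facade-style resource provider can look like with the HAPI FHIR Plain Server mentioned above. It is illustrative only: the LegacyPatientStore interface, its field names and the mapping logic are placeholders standing in for whatever your source system actually exposes, not part of HAPI FHIR or any vendor product.

```java
import ca.uhn.fhir.rest.annotation.IdParam;
import ca.uhn.fhir.rest.annotation.Read;
import ca.uhn.fhir.rest.server.IResourceProvider;
import ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException;
import org.hl7.fhir.r4.model.DateType;
import org.hl7.fhir.r4.model.IdType;
import org.hl7.fhir.r4.model.Patient;

/**
 * Facade-style provider: there is no FHIR repository, so each read is
 * translated on the fly from whatever the legacy system returns.
 */
public class PatientFacadeProvider implements IResourceProvider {

    /** Stand-in for the existing (non-FHIR) data source. */
    public interface LegacyPatientStore {
        LegacyPatient findById(String id);
    }

    /** Shape of a legacy row; field names are purely illustrative. */
    public static class LegacyPatient {
        public String id;
        public String firstName;
        public String lastName;
        public String birthDateIso; // e.g. "1970-01-31"
    }

    private final LegacyPatientStore legacyStore;

    public PatientFacadeProvider(LegacyPatientStore legacyStore) {
        this.legacyStore = legacyStore;
    }

    @Override
    public Class<Patient> getResourceType() {
        return Patient.class;
    }

    // Handles GET [base]/Patient/{id}: fetch from the source system, map to FHIR, return.
    @Read
    public Patient read(@IdParam IdType theId) {
        LegacyPatient row = legacyStore.findById(theId.getIdPart());
        if (row == null) {
            throw new ResourceNotFoundException(theId);
        }
        Patient patient = new Patient();
        patient.setId(theId.getIdPart());
        patient.addName().setFamily(row.lastName).addGiven(row.firstName);
        patient.setBirthDateElement(new DateType(row.birthDateIso));
        return patient;
    }
}
```

In practice, a provider like this is registered with HAPI FHIR's RestfulServer (typically via registerProvider), and similar providers and search methods are added for each resource type the facade needs to expose; the key point is that no FHIR repository is involved, so the legacy database remains the single source of truth.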
Pros And Cons Of An FHIR Facade
So, is an FHIR facade a friend or foe of a healthcare business?
Here are the pros and cons.
Establishing an FHIR facade is the easiest way to reach FHIR compatibility.
It is not solely a legal standard but a technological one that refines business processes with a single external API.
That lets your organization stay agile and up-to-date with the latest standards.
Working with a single external API saves your legacy backend from needing tedious updates every time a new data compliance standard appears in healthcare.
As a result, you get to cut costs on system support and maintenance.
With the FHIR facade, you can also enjoy a unified view of healthcare records with no further predicaments, as it can be connected to several backend systems simultaneously.
Imagine having several versatile backend systems as an integrated database with a unified interface.
Despite the many benefits, there are also significant disadvantages associated with introducing FHIR into your business processes.
First and foremost, legacy systems stand in the way – mapping the data within a legacy system to an FHIR "wrapper" might be challenging.
Establishing an FHIR facade might also be problematic in practice, as some companies lack the required in-house human resources and tech talent.
Furthermore, finding the appropriate experts in the outsourced software development market is an equally tall order.
Final Thoughts
Developing an FHIR-compliant API from scratch is a noble intention.
Nonetheless, establishing an FHIR facade likely offers a faster and less painful integration with your legacy backend.
Here are the final steps you should consider before embarking on your FHIR facade journey.
1. Outline and analyze your business needs and make sure the FHIR facade's functionality covers them comprehensively.
2. Choose your implementation option, as the technicalities of the process will define the lion's share of proceedings during the project.
3. Define your development approach: in-house versus outsourcing.
Either way, the process will require a team of experts who have already dealt with FHIR facade implementations.
Finally, do not forget to enjoy the result of your digital journey, as establishing an FHIR facade will deal just fine with your outdated data groundwork.
Preparing for the Black Friday shopping rush at a Walmart store in Miami.
Managing supply chains is rough for any company, as the past few years have shown.
For a retailing goliath like Walmart – with its fiscal 2023 revenue of $611 billion – the task is herculean.
That's why it was intriguing to see in late November that Walmart had signed a deal with a company with the unlikely name of Bamboo Rose to help streamline the Bentonville, Arkansas-based retailer's supply chain.
The Bamboo Rose software is designed to make it easier for Walmartâs buyers and product developers to work with its tens of thousands of suppliers around the globe, improving quality, cutting costs and helping to reduce food waste.
Financial terms of the deal weren't disclosed, but the companies said that Bamboo Rose software would be rolled out at both Walmart and Sam's Club.
"What we've been known for is sourcing and purchase-order management, because if you get that wrong everything else falls apart," said Bamboo Rose CEO Matt Stevens.
During the pandemic, snarled supply chains showed just how difficult it is for retailers to get the right products on their shelves at the right time.
Covid-19 highlighted the importance of what was once a back-office backwater and made it a focus for new technology.
Even without global lockdowns, the complexity of negotiating deals with thousands of suppliers, spread across the globe, for potentially millions of products can be daunting.
Getting sourcing right can have a decisive impact on a retailerâs profitability and on whether customers come away happy or frustrated.
Gloucester, Massachusetts-based Bamboo Rose has operated largely under the radar for the past 20 years in a space thatâs crowded with competitors that include Oracle and SAP as well as scores of startups.
Today, it has some 220 customers in apparel, general merchandise and food, including big brands like Urban Outfitters, Home Depot, Kohl's and American Eagle.
After a slow start, the firm now has annual recurring revenue (a metric that subscription software companies use thatâs generally higher than annual revenue) of $50 million, and Stevens said that he hoped to reach $100 million in a few years.
"Even if I set aside Walmart," he said, "it's been a record year for the business."
In 2001, founder Sue Welch, who had worked in retail, launched Bamboo Rose with Kamal Anand, the companyâs chief technology officer today, under the name TradeStone to build out software to help companies manage their products and sourcing.
At the time, sourcing decisions were generally managed by endless email chains and clunky spreadsheets, costing retailers time and money.
The firm eventually created a marketplace where retailers, wholesalers and suppliers could all work together to speed up the process.
"Sue's vision was to build a collaborative platform," Stevens said, noting that the company targeted the biggest brands for its software.
"She focused on the upper end of the market where the messes are pretty ugly."
For one customer, which has more than 880 global suppliers and 25,000 unique products, Bamboo Rose worked to shorten lead times and reduce changes to purchase orders.
For one U.S. retailer (which Bamboo Rose declines to identify) with more than 2,300 stores, the software worked to keep shelves stocked, rotate seasonal items, cut the time to market by six weeks and increase profit margins.
Still, there are many different areas of supply chain, and over the past two decades Bamboo Rose has tried a lot of them.
In 2016, the company, which then had 80 retailers on board and was called TradeStone, rebranded as Bamboo Rose and added what it called a B2B marketplace for retailers, wholesalers and suppliers to its existing business in product lifecycle management software.
On employee review site Glassdoor, reviews from 2017 to 2021 decry micromanagement, bottlenecks and lack of innovation.
"The company has gone through a lot of change," said Lora Cecere, founder of research firm Supply Chain Insights.
"They've always struggled to find their niche."
In June 2022, Boulder, Colorado-based private equity firm Rubicon Technology Partners bought an undisclosed stake and Welch left later that year.
Stevens, 62, who had previously been the founder and CEO of performance-analytics firm AppNeta, took over as CEO.
AppNeta, which had also been funded by Rubicon, was acquired by Broadcom two years ago, and Stevens hoped to work similar magic on Bamboo Rose with a new team of execs.
Over the past year, he's acquired product-development firm Backbone PLM and supplier-engagement company Supply Pilot to add their capabilities.
In order to get the company ready for expansion, Bamboo Rose first had to unwind some projects it had done for specific customers that wouldn't scale, he said.
"That's been a little painful and rocky," he said.
"But we're through most of it, and every single one of those customers are still here."
In 2019, Bamboo Rose started a project with Sam's Club, Stevens said.
"Sam's had a sourcing problem, mostly in the dimension of food," Stevens said.
Food is "super, super complex," he said, because spoilage can occur in transit, potentially causing health issues for customers.
As Sam's rolled out its sourcing platform, Stevens said he began having conversations with Walmart, which had been testing a lot of different vendors for the larger brand.
Walmart spokesman Payton McCormick said by email that the company "was confident that working with Bamboo Rose is the right choice for Walmart," but declined to answer a question about its tests of other sourcing software.
"Walmart approached us in early 2023 and said, 'We've done our due diligence and we don't think you're perfect, but we agree with Sam's that it begins and ends with sourcing and we see a lot of benefits to our customers to having a global sourcing platform,'" said Stevens, who flew to Bentonville to meet with Walmart execs in person earlier this year.
Walmart's McCormick said that the two started working together in February 2023 and expect to complete the rollout over the next 12 to 18 months.
"It's a big challenge – a little scary," Stevens said.
"We've almost built out a company within a company."
The more stores, the more complexity; and the more variety – a hallmark of Walmart – the more difficult.
And after the supply chain snags of the pandemic, product sourcing continues to have a moment.
"I think it's a good time for Bamboo Rose," Cecere said.
"They've matured and the market has matured, and people feel more need for software because of what's happening in the global markets."
Love it or loathe it, mowing your grass is one seasonal task that can't be avoided if you like a neat lawn.
It can be a tedious chore that cuts into your schedule when you'd prefer to be doing something else.
Robot lawn mowers give you back your precious time, save you from breaking into a sweat in the sun, and provide you with a lawn to be proud of.
It all sounds perfect, but there is a major drawback.
Robot lawn mowers come with a hefty price tag.
We weigh up what the future looks like for robot lawn mowers in 2024, ask if they are up to the task and whether homeowners should splurge.
Unlike their counterparts, robot vacuums, robot lawn mowers can't compete on price, making them far less accessible.
For comparison, our best budget robot vacuum – the iLife V3s Pro – featured in our best robot vacuum buying guide comes in at under $80.
In contrast, an entry-level robot lawn mower will set you back in the region of $1,500.
A robot vacuum could be considered an impulse purchase, whereas a robot lawn mower needs more thought.
And if you thought an entry-level robot lawn mower was pricey, a top-of-the-range model with GPS tracking will set you back $6,000.
It's a steep price to pay for a neat lawn – one that would easily cover the cost of a ride-on mower.
When you break down the price, it's cheaper to call a lawn maintenance service to do the job for you.
Paying $40 a week over 26 weeks will cost $1,040, less than an entry-level model.
However, if you do opt for a budget robot mower, you will still recoup the cost within two seasons.
Buy or lease
With such a steep cost, one choice is to lease a robot lawn mower from a company such as Automow – giving you the option to try before you buy.
Having tested various robot lawn mowers for Tom's Guide, including the Husqvarna Automower 450XH EPOS, senior editor John Velasco believes the high price is partly due to a lack of competition.
Husqvarna is the dominant brand in the market, with few other companies making a mark, apart from Ambrogio, Worx and Robomow.
So, while other manufacturers play catch-up, prices will stay high due to a lack of competition.
But as more companies enter the marketplace, we would expect costs to become more competitive and affordable.
We'll have to wait and see how this pans out in 2024.
Robot lawn mowers have traditionally struggled with object recognition – crashing into garden furniture or tumbling into flower beds.
Identifying boundaries and obstacles has been an issue.
Having tested his first robot lawn mower five years ago, John says they have greatly improved, especially in terms of maneuverability.
And the latest robot lawn mowers now feature GPS technology, eliminating the time-consuming task of marking out your boundaries with sensors.
Although wireless units mean you won't have to stake out a boundary, they come at a premium price.
And if your yard is shaded by trees, or objects that will interfere with the GPS sight line, you might be best to stick with a wired unit.
Wired robot lawn mowers require a perimeter wire to work.
You can install this yourself, by placing the wires close to the ground, but youâll find the wires quickly become covered in grass.
The alternative is to ask a professional to set up the boundary wires for you.
They'll insert them into the ground, between 3 and 4 inches deep.
Just like a robot vacuum, wired mowers move around a defined space and use the sensors to stop and redirect them when they collide with an object.
However, they move around randomly, unlike wireless units, which mow in straight lines.
If you desire a striped or checkerboard pattern on your lawn, you'll need to invest in a wireless mower.
So apart from how the boundaries are detected, how do the two mowers differ?
The main difference is connectivity: a system that connects to your phone through Bluetooth, or pricier versions that connect through a cellular service and are equipped with GPS mapping technology.
You won't be left entirely without any work to do.
Robot lawn mowers may cut your grass, but they don't trim the edges of your lawn for that perfect finish.
The blades on the mower are set back from the outer casing and even though the mower will reach the boundary, the blades will fall a little short.
John Velasco says he trims the edges of his lawn every couple of weeks to keep it looking neat.
If your yard has very steep inclines, you may find a goat is a better option than a robot lawn mower.
Most robotic lawn mowers, especially if they have a powerful motor, can handle 35% inclines.
But what about those bumps and ruts?
If you are in the market for a neat lawn, you've probably already filled in any challenging ruts to create an even surface, and your mower won't need to negotiate any awkward holes.
However, if you have a playful dog that likes to bury bones, you’re best to choose a model with all-wheel drive, as it will cope with holes better than a two-wheel model.
Some robot lawn mowers can also be programmed to avoid any challenging areas, although, once again, this feature will add to the price.
Battery capacity could be an issue if you have a large lawn.
If this is the case, the best option is to go for a robot lawn mower with large battery capacity.
This will save the mower having to return to the base station to recharge during each cut.
If your lawn is on the small to medium side, a lower-cost mower will be adequate.
Robot lawn mowers are designed to mow grass every day or two, and therefore only cut away a small sliver of grass.
In fact, the clippings are so short that you won't notice them.
This fine material is left as a mulch to nourish the soil, providing you with a healthy lawn, while eliminating the task of raking up grass cuttings.
Robot lawn mowers have advanced over the last few years, but the cost remains high.
New models are now able to mow more complex yards and have added features enabling you to control and track your mower remotely, with some even having the technology to shelter from the rain.
Investing in a robot mower is great if you have a busy lifestyle and you don't want to be spending a couple of hours each week mowing your lawn.
However, if you have a small yard and can mow your lawn without breaking into a sweat, save your money and stick to your manual or electric mower.
You'll be able to pick one up at a fraction of the cost.
Ahead of an all-hands meeting in March of this year, hundreds of Rebellion Defense employees had been anticipating good news.
There'd been a major military contract in the works for months, and the $1 billion AI startup's leadership had assured them it was all but secured.
Potentially worth tens of millions of dollars, the deal with the Department of Defense was expected to unlock a new round of funding for Rebellion and solidify its reputation as one of the Pentagonâs biggest allies in the sprint to win the AI arms race.
Rebellion had hired dozens of engineers and other experts to work on the product: a tactical threat awareness tool, or "TTA," that would use AI to make battlefield decisions.
The tool, which Rebellion was trying to sell to the Under Secretary of Defense for Research and Engineering, according to two sources, was core to the company's mission of modernizing warfare with sophisticated software, a vision that had attracted millions in investment from the likes of former Google CEO Eric Schmidt and media mogul James Murdoch.
But when CEO Chris Lynch, a tech entrepreneur-turned-Pentagon executive, faced staff at Rebellion's Washington, D.C. headquarters in March, it was to deliver bad news: They hadn't won the contract.
The following month, approximately 90 employees, many of whom had recently joined, were laid off.
By September, Lynch was gone as well, as were Rebellion's U.K. operations.
"To our knowledge, this contract has still not been awarded today," Rebellion spokesperson Gia DeHart said in a statement, characterizing it as an "example of the innovation adoption challenges faced by startups seeking to do business with the U.S. Department of Defense."
Launched in 2019, Rebellion very quickly became one of the highest profile defense tech companies around.
But its ascent is difficult to track.
It had little proven record as a government contractor and never secured a large commercial market.
Meanwhile, its marquee product, Nova, had been unable to find widespread adoption.
"I just thought: they are a billion-dollar company, they have to have [a core product], it's probably just top secret," a former Rebellion employee said.
"By the time I got into the company, people were like, well, we don't really have one."
And now it didn't have that expected DoD contract, either.
While Rebellion presented a veneer of success and influence, with frequent visits from military brass to its flashy offices in D.C. and London, interviews with 18 former employees and advisors to the company, along with public contracts, suggest that it overwhelmingly benefitted from aspirational, investor-induced hype.
Earlier this year, Lynch had painted rosy projections for 2023 to some employees and the board, varying from $50 million to almost $100 million in total contract value, according to three former staff.
The actual figure, these people said, was closer to $20 million.
Multiple sources in a position to know said that Lynch's departure, "to tackle new endeavors," was engineered by a board tired of his overstated financial speculations and failures to land key contracts.
Meanwhile, after it spent $430,000 lobbying the federal government on AI matters, according to disclosures, government procurement records show Rebellion has received just $7.2 million from publicly listed contracts this year.
In 2022, that number was $6.2 million.
(Not all of its federal contracts may be public; Rebellion declined to comment on the scope of its government deals.)
"Rebellion was built to tackle some of the most audacious challenges for the defense of our country and allies, and that vision is more important now than ever before," Lynch said in an email.
"At the start of the year, I stepped down from Rebellion after 4.5 years of building.
I was ready for incredible new leadership to scale the next phase of the company."
Responding to a detailed list of questions about Rebellion's revenue, contracts and management issues, Rebellion declined to share financial figures or comment on personnel matters.
Spokesperson DeHart would say only that the company had reported a 50 percent jump in its annual contract value this year.
"Like most defense start-ups, we had wins and losses as we worked to find product-market fit," she said in an emailed statement.
"Over the last six months, the company took a series of deliberate steps to adapt and refine our strategy to ensure long-term sustainability."
Incoming Rebellion CEO Ben FitzGerald, an investor and former executive chairman tapped for the new role earlier this week, conceded that the company had faced a "number of management challenges," along with "acquisition challenges that plague the Department of Defense."
But he said, "Rebellion has since right-sized our business, and we have an incredible team in place."
A Pentagon spokesperson declined to comment on the Department of Defense's relationship with Rebellion.
Rebellion Defense co-founder and former CEO, Chris Lynch.
As CEO and cofounder, Lynch brought an outsider's chutzpah to the mission of bringing AI to the military-industrial complex.
"I've run into so many people in my life who want to tell me about how we should be doing other things than working in defense, or investors who believe that we shouldn't be starting companies in defense," Lynch said in a 2022 interview.
"And it's chicken shit."
Prior to Rebellion, Lynch was a Seattle-based tech entrepreneur whose ventures included Celebrity Hookup, a hot-or-not app and website for attractive famous people; gifting site Thoughtful; and Sparkword, an iPhone word game.
After getting a job with the U.S. Digital Service, he was later hired to lead a division within the Department of Defense called Defense Digital Service tasked with getting AI tech into the government at speed.
There, he met Nicole Camarillo from the U.S. Army Cyber Command's talent team.
When an employee revolt forced Google to stop supplying the Pentagon with AI tools that could label drone footage, Lynch and Camarillo saw an opportunity: if the tech giants wouldn't supply the government with cutting-edge defense software, maybe they could.
In early 2019, the duo left their government roles to build Rebellion Defense, naming their new company after Star Wars' Rebel Alliance.
They linked up with a former U.K. Cabinet Office official, Oliver Lewis, who had been focused on transforming digital operations for the British civil service.
He joined to lead a London Rebellion division with the goal of selling to the U.S. and U.K. simultaneously.
According to the cofounders' origin myth, the trio wrote down their "Rebellion Manifesto" in a coffee shop, defining "why we would be different and why this matters," Lynch has previously said.
They landed on a mission to deliver AI technologies that "defended democracy, humanitarian values and the rule of law," Lewis added.
(Camarillo left the company earlier this year.
She didn't respond to a request for comment.)
More than a dozen employees joined Lynch from the Pentagon.
And his pitch seemed to land among defense leaders and Silicon Valley luminaries alike: the late former Defense Secretary Ash Carter joined the company's board, and Schmidt, the former Google CEO, backed its $11 million seed round with his venture fund Innovation Endeavors; the founders described Schmidt as a "founding partner."
(Multiple former Rebellion employees and DoD staff said Schmidt was a largely unengaged investor.
Schmidt declined to comment.)
As Rebellion touted its government contracts, venture capital firm Kleiner Perkins led a $60 million Series A funding round in April 2021, again joined by Schmidt's Innovation Endeavors and other investors.
Just months later, Insight Partners led a $225 million funding round that valued the company at $1 billion.
Key to its investor pitch at the time was a suite of mission-oriented tools, which included a product called Nova.
An automated network bullet-proofing service, Nova used white hat hackers to identify vulnerabilities and conduct recurring screenings to detect cybersecurity threats.
It had pilot contracts with the Department of Defense and the U.K.'s Ministry of Defence, but it didn't immediately take off: of the contracts Rebellion inked with government agencies, many were free trials and ultimately discontinued, two sources familiar with the matter said.
In a statement, Rebellion said the trials were "incubation-type engagements" meant to "demonstrate our unique product offerings."
Rebellion also faced the reality of government contracting, where payouts for startups can be small and incremental.
In one payment in 2021, Rebellion received $50,000 from the U.S. Air Force for a contract to detect cyber vulnerabilities; the agency paid $650,000 the following year for a contract related to traffic engineering, according to procurement records.
By the end of 2022, some contracts had generated payments around $1 million.
Despite its original pan-Atlantic strategy, Rebellion's U.S.-based leadership soon began to worry that its U.K. operations were a distraction from much larger revenue opportunities with the Pentagon, two former employees said.
In late 2022, Lewis, with Rebellion COO Bob Daigle and another executive named Alex Burton, conceived of a new company that they hoped would preserve Rebellion's U.K. mission in the face of looming cuts by absorbing its U.K. contracts and London team, two sources with knowledge of their plans said.
Lewis approached an outside investor about the move, but when the board learned of his efforts to secure potential funding the project hit a wall, these sources added.
He left Rebellion in November 2022 along with Daigle and Burton.
Soon after, the company fired roughly half of its London office, according to multiple sources.
The details surrounding Lewis' departure have not been previously reported.
Daigle and Burton didn't respond to requests for comment.
Rebellion appears eager to close out its chapter with Lynch, who remains one of the company's largest shareholders.
In a Dec. 18 announcement naming FitzGerald as its permanent CEO, the company noted that it had undergone "strategic restructuring" under interim CEO Barry Sowerwine and had seen a fifty percent increase in annual revenue.
It currently employs about 100 people, and an investor, who wished to remain anonymous, said that it has "multiple years" of runway at its current operation.
The U.S. Army has awarded a contract for Rebellion's Nova product; government procurement records indicate the company received a $6 million payment as part of the contract.
Lynch, for his part, seems intent on remaining in the defense space, though it's unclear how the veteran founder plans to reinvent himself.
"As for me, I am focused on what comes next and how our military utilizes technology to overmatch adversaries today and tomorrow," he said.
"I can't think of anything more important."
In the realm of research, a significant shift has occurred, marking the transition from the physical confines of libraries and archives to the expansive digital universe.
This transformation signifies a true revolution, reshaping our pursuit of knowledge in the internet age.
Research, a cornerstone of human progress, has evolved remarkably.
From the times of ancient scholars to modern researchers, the quest for knowledge has been a constant.
The gathering of data, once a laborious and time-consuming task, often spanning months or years, has been dramatically condensed by online databases and search engines, offering rapid access to information.
This evolution of research methodologies prompts the question: How did this transformation occur?
Evolution Of Research Methodologies Our research methods have evolved in tandem with societal advances, from the storied libraries of Alexandria to the high-tech data centers of Silicon Valley.
The digital revolution has accelerated this shift, positioning technology as an indispensable ally in the quest for knowledge.
This article delves into the transformative role of the internet and web scraping in research, highlighting their profound implications and the immense power now at researchers’ fingertips.
Synergy Between Technology And Research The integration of traditional research methods with modern technology has sparked a renaissance in the pursuit of knowledge.
Tools such as data analysis software, online surveys and digital collaboration platforms have become essential in piecing together complex puzzles from a researcher’s desktop.
This blend of old and new enables researchers to rapidly test, review and refine hypotheses, significantly shortening the journey from curiosity to discovery.
The internet has broken down barriers to information access and democratized knowledge, enabling anyone, anywhere, to tap into scholarly articles, data sets and libraries once reserved for a privileged few.
This shift not only empowers individuals but also fosters a more informed citizenry.
However, the sheer scale of data available online poses a challenge for manual compilation and analysis.
The Role Of Web Scraping In Modern Research Web scraping, now a staple of modern research, is an automated technique used to extract large volumes of data from websites.
This method transforms the chaotic internet into a structured information repository, reflecting our insatiable thirst for knowledge.
Web scraping provides a multi-dimensional view of the information landscape, uncovering patterns and correlations that may elude the naked eye.
A Practical Guide For Researchers: Harnessing Web Scraping For researchers eager to utilize this tool, web scraping is a boon, accompanied by a set of guidelines (a minimal sketch follows the list):
• Understand precisely which data will best serve your research, whether it be social media trends, market statistics or educational resources.
• Select web scraping tools that match your technical proficiency and research requirements, ranging from user-friendly platforms to more complex, customizable software.
• Always scrape data responsibly, respecting data privacy laws and website terms of service. Ethical research is credible research.
• Simply having raw data is not enough. Apply data-cleaning techniques to ensure accuracy, then use robust analytical methods to extract insights.
• The digital landscape is in constant flux. Keep abreast of legal and technological developments in web scraping to maintain the validity and relevance of your work.
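As a minimal illustration of these guidelines, the sketch below fetches a single page and extracts headline text with the widely used requests and BeautifulSoup libraries; the URL and CSS selector are hypothetical placeholders, and a real project should also check the site's robots.txt and terms of service before scraping.

```python
# Minimal, illustrative scraper; the URL and selector below are placeholders.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/articles"  # hypothetical listing page

def fetch_titles(url: str) -> list[str]:
    # Identify the client and fail fast on network or HTTP errors.
    response = requests.get(url, headers={"User-Agent": "research-bot/0.1"}, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Extract and lightly clean the text of each matching element.
    return [tag.get_text(strip=True) for tag in soup.select("h2.title")]

if __name__ == "__main__":
    for title in fetch_titles(URL):
        print(title)
```

From there, data cleaning and analysis (the fourth guideline) would typically happen in a separate pass rather than inside the scraper itself.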
Predicting A New Dawn In Research The potential of web scraping in research is vast, poised to be a catalyst for innovation and enabling swift, adaptive studies.
Its predictive power could redefine entire disciplines, from market research to the social sciences.
Web scraping extends beyond mere data collection; it enables comprehensive market analyses and real-time monitoring of public opinion shifts and enhances academic studies with broader data sets for analysis and validation.
The new digital landscape brings with it responsibilities, particularly regarding data privacy and ethical information sharing.
Researchers must adeptly navigate these challenges, balancing ease of data access with considerations of consent and ownership.
The reliability of sources, data veracity and methodological integrity remain crucial in maintaining research credibility.
Conclusion As we advance through the digital revolution, the landscape of research is not just growing; it's flourishing.
By embracing digital tools and the myriad opportunities they offer, researchers are making pivotal contributions to their fields, aiding in the creation of a more informed and interconnected world.
This transformation of research, brought on by the internet age, is not just a change; it represents a paradigm shift, heralding a new era in our journey of exploration and understanding.
While opinions may vary, one fact stands out: Modern technologies have cultivated an environment where hypotheses can be rapidly tested, reviewed and refined.
Today, the journey from “what if?” to “eureka!” is shorter than ever before.
In the workplace, considerable focus is often put on what truly drives and fulfills us as individuals.
Many individuals mistakenly equate happiness with the ultimate purpose of their professional lives, perceiving happiness as a fleeting state of comfort and momentary satisfaction.
However, research has indicated that relentlessly pursuing happiness can paradoxically lead to unhappiness.
It becomes evident that mere happiness is insufficient to sustain one’s motivation and sense of purpose at work.
Researchers suggest that in a professional context, the source of despair and emptiness is not the absence of happiness but rather the absence of "meaning."
Our contemporary culture strongly emphasizes the pursuit of happiness in the workplace.
However, it is increasingly evident that genuine fulfillment and motivation at work stem from seeking meaning.
As renowned psychologist Martin Seligman has argued, meaning arises from belonging to and contributing to something greater than oneself while striving to reach one's full potential.
I have seen that individuals who find meaning in their professional lives exhibit greater resilience, excel academically and professionally and enjoy longer and more fulfilling careers.
Fostering meaning and purpose in the modern workplace relies on four vital pillars: belongingness, purpose, work transcendence and the art of narrative-shaping.
I intend to outline the principles behind these pillars and posit genuine real-world scenarios in which next-generation technologies bring these pillars to life within organizations.
1. Belonging In The Workplace
Belonging is about recognizing and valuing each person's worth, fostering genuine care among colleagues and creating a welcoming organizational culture.
This involves feeling valued by your team, being accepted within the group and aligning with the company’s values.
Belonging also includes fairness, self-respect, acknowledgment, personal growth, recognition, fame and a sense of ownership in the organization.
Modern workplaces need technology for collaboration, communication and rewards to enhance this sense of belonging.
Leadership should encourage employees to use these tools effectively.
Can technology further boost belongingness?
Virtual spaces and communication tools help, but advanced technologies like data analytics can intelligently track individual contributions to company goals.
Imagine a platform that aligns activities with specific objectives, provides comparisons and offers personalized suggestions.
AI platforms can reduce frustration, promote self-directed learning and increase alignment with corporate goals, ultimately enhancing belongingness.
2. Work Purpose
A sense of purpose is a strong motivator for employees, driving them to excel in their careers.
It goes beyond personal happiness and focuses on how one’s skills and contributions can benefit the organization and society.
Instead of pursuing individual desires, it’s about aligning with the organization’s goals.
Unfortunately, many workplaces lack the technology to support this journey of self-discovery and alignment.
To address this, we should consider moving away from traditional engagement methods and embrace innovative approaches that empower employees to find their true purpose at work.
Advanced data analytics and AI tools can help individuals align their skills and interests with the organization’s objectives.
These tools can automate routine tasks, provide context-based information and create a web of knowledge that shows how an individual’s work relates to higher-level processes, ultimately enhancing understanding of work purpose, increasing productivity and reducing stress.
3. Work Transcendence
This phenomenon, known as transcendence, occurs when daily routines fade, and one’s sense of self diminishes, leading to a deep connection with what feels like a higher reality.
It’s like the state of flow, in which time and place blur and work becomes profoundly fulfilling.
Increased employee engagement can drive individuals into this flow state.
Technology can assist employees in reaching this state of deep engagement.
Virtual reality (VR) and augmented reality (AR) apps can create immersive experiences, eliminating distractions.
Moreover, technology can analyze an individual’s work patterns, goals and requirements through data analytics.
It can implement do-not-disturb features across various enterprise apps, offer suggestions and provide scores to help individuals gauge their progress toward achieving a state of work transcendence.
4. The Art Of Narrative-Shaping
Storytelling is a potent tool for gaining insight into one's career journey and personal growth.
It’s crucial to convey to the workforce that individuals have the power to shape and redefine their professional narratives while maintaining factual accuracy.
Your professional journey isn’t just a series of events in chronological order; it’s a narrative that can be edited, interpreted and told to align with your evolving identity and ambitions.
Digital tools and platforms enable individuals to document and share their professional journeys through online portfolios, blogs and social media.
Furthermore, data analytics can assist in tracking progress over time, offering valuable insights into professional growth and allowing for narrative adjustments as one evolves.
Imagine a platform that helps individuals view their various activities, events and results as a purposeful web of productivity information, connecting work products and illustrating the evolution of complex business processes through timeline-based visualizations.
Such a platform would empower professionals to uncover and shape their narratives effectively.
Conclusion Contemporary technologies promise to enrich the workplace experience by fostering a sense of belonging, purpose, transcendence and narrative shaping.
When employed with care and consideration, these tools can cultivate a work environment where employees discover genuine significance and fulfillment.
This, in turn, can lead to heightened productivity, increased engagement and greater job satisfaction in their careers.
Any platforms or solutions that promote these objectives should offer practical and personalized features that empower individuals to embrace these dimensions of meaning in today’s workplace, making them integral components of a thriving professional landscape.
The ban which yesterday saw the Apple Watch pulled from sale may not be the last.
A Stanford law professor warns that we are likely to see more patent clashes between the consumer electronics and health tech worlds, and that the results can be both unpredictable and dramatic. A quick recap on the Apple Watch ban: Back in 2013, Apple reportedly contacted health tech company Masimo to discuss a potential collaboration between the two companies.
Instead, claims Masimo, Apple used the meetings to identify staff it wanted to poach.
Masimo later called the meetings a "targeted effort to obtain information and expertise."
Apple did indeed hire a number of Masimo staff ahead of the launch of the Apple Watch.
Masimo CEO Joe Kiani later expressed concern about what Apple may have been trying to achieve.
The company describes itself as "the inventors of modern pulse oximeters," and its tech is used in many hospitals.
In 2020, Masimo sued Apple, accusing it of misusing its trade secrets and infringing 10 Masimo patents.
The lawsuit asked for an injunction on the sale of the Apple Watch.
The International Trade Commission (ITC) subsequently upheld a ruling that Apple had infringed Masimo's patents, which led to the import ban.
That ban doesn't officially take effect until December 25, but Apple decided to withdraw the devices from sale ahead of the deadline.
There are still ways the ban could be averted, but there's as yet no sign of this happening.
This ban may not be the last A piece in the says that this is unlikely to be the last patent clash between two different worlds, which are increasingly converging: consumer electronics and health tech.
The paperâs Richard Waters likens it to what happened with the first smartphones.
The case is a dramatic demonstration of the collision of different IP regimes when digital technology moves into new markets.
In the early days of smartphones, the convergence of mobile communications and computing produced a barrage of lawsuits between computing companies such as Apple and Google on the one hand, and mobile technology companies including Nokia and Motorola on the other.
Much the same is now happening on a broader front as mobile computing invades markets including healthcare, where medical device companies have their own IP moats.
Stanford University law professor Mark Lemley argues that these battles have the potential for similar high-stakes outcomes in future.
Cases such as this have become "a very expensive game of chicken" because the final result is hard to predict and the consequences can be dramatic, says Lemley.
One reason for this is that, while courts have a range of sanctions open to them when they find a patent has been infringed, the International Trade Commission (ITC) has only one: banning the products from the US.
Contrasts with AliveCor outcome The unpredictability of such cases is underlined by a near-identical case where the Apple Watch Series 8 and Ultra were almost banned.
Exactly as with Masimo, AliveCor had shown its technology to Apple, hoping to secure a partnership agreement.
And again, that partnership never materialized, and Apple launched the tech on its own.
AliveCor claimed that the ECG sensor in Apple Watches infringed its patents, and the ITC agreed, meaning that the Apple Watch Series 8 and Ultra models could have been banned.
But Apple successfully applied to have the underlying patents ruled invalid, and the ITC duly upheld that ruling, so the Apple Watch ban never took effect.
The award-winning Eve Play is a compact device that can bring Apple AirPlay 2 streaming to almost any audio system.
Once upon a time, Apple used to sell a brilliant little gadget called the AirPort Express.
The device was a wireless access point and supported Appleâs AirPlay for streaming music to older audio systems using a smartphone, tablet or computer.
For some reason, Apple stopped selling the AirPort Express, and adding AirPlay to an existing audio setup hasn't been easy since.
Like Apple's AirPort Express, Eve Play can turn almost any audio device with an analog or digital input into a streaming system that can play music from Tidal, Spotify, Qobuz and Amazon.
In fact, with AirPlay, you can stream any audio to multiple devices and have music all over the home.
The Eve Play is the answer if you still own a much-loved music system.
The new Eve Play received a Red Dot Design Award in 2023, and as well as looking sleek and stylish with its anodized aluminum casing, it will work with just about any audio device, such as amplifiers, active speakers and even soundbars.
A great feature of the Eve Play is its ability to perfectly synchronize devices in a multi-room setup so you can use it with other AirPlay-compatible speakers and devices in your home.
Eve Play has all the connections you need for linking to an audio or hi-fi system.
Eve Play comes from a German manufacturer that specializes in home automation.
This automation means you can integrate Eve Play with your home automation system so that you can be welcomed home with a favorite piece of music or radio show.
At the heart of the Eve Play is a Texas Instruments PCM5122A DAC (digital-to-analog converter) with a signal-to-noise ratio of 112 dB. This chipset produces audiophile quality at resolutions up to 48 kHz, just like some premium wireless streamers costing up to $1,000.
At the rear of the Eve Play, there are most of the connectivity options you could want for connecting the device to an audio system.
The Eve Play connects to a network using 2.4/5 GHz Wi-Fi or Ethernet.
There's an RCA analog output as well as both optical and coaxial digital connections for linking the Eve Play to audio equipment with built-in DACs.
Eve Play can be used to receive audio streamed from an iPhone, iPad or Mac using AirPlay 2.
It's a good idea to use Eve Play's RCA Phono output because the audio is being decoded via the Texas Instruments PCM5122A DAC.
The sound quality of that chipset is excellent and the Eve Play brings an openness to the music, especially when streaming hi-res digital audio source files.
Setting up the Eve Play is straightforward, using an iPhone or iPad running the Eve app.
The app uses the iPhone's microphone to calibrate the Eve Play's synchronization, ensuring that it perfectly matches any other AirPlay devices that are also part of a multi-room audio system.
Accurate synchronization is vital if you play the same music in different rooms around the home to avoid an annoying echo or lag.
Top marks to Eve for including this critical feature.
If you have an older audio device or equipment that doesn't support music streaming, the Eve Play is an ideal way of adding that function.
This handy little device produces high-quality audio and works with almost anything with RCA Line-In, Optical or Coaxial inputs.
Eve Play ships with a power supply and a cable for connecting to an amplifier or other device with RCA Phono inputs.
Eve Play is more expensive than simple AirPlay solutions, like Belkin's SoundForm Connect, but it has that vital synchronization feature and more output options.
If you want to update that vintage hi-fi system to the latest streaming technology, Eve Play is an excellent solution.
Eve Play is available now and costs $149.95.
The rising sophistication and frequency of cyber threats, including data breaches and ransomware attacks, are driving organizations to reevaluate their traditional security models, fueling the push for zero-trust security.
The core tenet of zero trust is not to trust any entity, whether internal or external, until it can be verified and authenticated for access to the network and each application individually.
The concept emphasizes continuous, very fine-grained verification and the principle of "never trust, always verify."
Most IT security professionals have successfully moved beyond focusing on perimeter-based protection to adopt more granular and adaptive approaches.
Many look at the network, dividing it into smaller, isolated segments to limit lateral movement for attackers if they breach it.
Segmentation is a key aspect of damage control in a network breach.
They’re encouraged to drill down another level to identity-centric security and the need for strong authentication and access control mechanisms, such as multifactor authentication (MFA) and least privileged access for individual applications.
All that said, there’s a level of security that’s far deeper and more targeted yet often fails to be considered as part of zero trust: cryptography, which helps ensure both the confidentiality and integrity of the communicated data.
Without proper cryptography, your zero-trust strategy is meaningless.
In a recent study, researchers at Quantum Xchange looked at data traffic and attributes across a range of cipher suites and protocols: plaintext, TLS 1.0, TLS 1.2, TLS 1.3 and SSL 3.0.
These and others (like MD5 and SHA-1) have been the backbone of data encryption, but technological advances have spotlighted weak spots and created openings for network and data risks.
What our research found was that data encryption, the most basic layer of security, is being woefully undervalued.
• Nearly two-thirds (61%) of network traffic analyzed wasn't encrypted at all. In fact, on one hospital network, 92% of data traffic had no encryption.
• Up to 80% of network traffic that did use encryption was found to have detectable flaws.
• An alarming 87% of encrypted, host-to-host relationships depended on TLS 1.2, a transport layer security version with known vulnerabilities and limitations that were addressed in TLS 1.3, released five years ago.
Across both commercial and governmental bodies (government entities are charged with meeting a whole host of cybersecurity mandates), security professionals are working to integrate zero-trust security into future-forward measures like crypto-agility and quantum-safe encryption.
But without a shift in thinking about how a more profound encryption strategy can support those efforts, the fundamentals of any zero-trust strategy are endangered.
Application integrity, data confidentiality and device access are all compromised if encryption is shaky.
Rather than risking compliance violations and regulatory action, organizations can mitigate these avoidable lapses.
Here are steps every company should consider as they integrate cryptography into executing against their zero-trust plan.
1. Evaluate compliance with your organization's zero-trust policies.
2. Move from one-time scans to continuous monitoring, integrating alerting on weak cryptography into your incident response processes (a minimal check is sketched after this list).
3. Split the focus between on-network and on-disk cryptography.
4. Evaluate cryptography as "in use," not simply as "designed." For example, strong certificates are frequently found on endpoints, but on the network, the cipher is downgraded.
5. Focus on all applications, including on-premise, third-party and in the cloud, as VPN tunnels can hide poor encryption.
6. Know who has access to the cryptographic keys, especially when someone who may have installed the keys leaves the company.
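As one way to act on item 2, the hedged sketch below reports the TLS version and cipher a server actually negotiates; the host name is a placeholder, and a production setup would scan many endpoints on a schedule and feed results into alerting rather than printing them.

```python
# Minimal illustration: report the TLS version and cipher a server negotiates.
# "example.com" is a placeholder host, not a recommendation.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> tuple[str, str]:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, _protocol, _bits = tls.cipher()
            return tls.version(), cipher_name

if __name__ == "__main__":
    version, cipher = check_tls("example.com")
    print(f"Negotiated {version} with cipher {cipher}")
    if version != "TLSv1.3":
        print("Warning: consider upgrading this endpoint to TLS 1.3.")
```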
In summary, strong protection starts when you have visibility into the true state of your cryptographic infrastructure.
It’s a cornerstone of zero-trust security because it helps protect data, verify identities and secure communications in a way that aligns with zero-trust principles.
It ensures that security isn’t solely dependent on network perimeters but is applied throughout the network and data life cycle.
China released draft guidelines Friday aimed at curbing excessive spending on online gaming in the latest move by the ruling Communist Party to keep control of the virtual economy.
The proposal caused shares in the biggest Chinese gaming companies, Tencent and NetEase, to plunge in Hong Kong.
China’s gaming regulator, the National Press and Publication Administration, issued guidelines saying online games cannot offer incentives for daily log-ins or purchases.
Other restrictions include limiting how much users can recharge and issuing warnings for “irrational consumption behavior.”
Shares in Tencent, China’s largest gaming company, dived about 16% before recovering some ground to close 12% lower.
Rival NetEase’s stock price lost about 25%.
Beijing has taken various measures against the online games sector in recent years.
In 2021, regulators restricted the amount of time children could spend on games to just three hours a week.
A state media news outlet described online games as “spiritual opium,” an allusion to past eras when addiction to the drug was widespread in China.
Approvals of new video games also were suspended for about eight months, resuming only in April 2022 as authorities eased a broader crackdown on the entire technology industry.
© 2023 The Associated Press. All rights reserved.
This material may not be published, broadcast, rewritten or redistributed without permission.
When ChatGPT hit the scene in November 2022, it put generative AI into the hands of anyone who wanted to try it.
Now, it seems everyone has an opinion about using generative AI to simplify various aspects of work and life.
Generative AI has thrust the AI discussion right into the mainstream because its outputs are so relatable.
Anyone can play around with it and imagine the ways they might be able to make use of it.
That’s certainly true in business settings.
Thomson Reuters' August 2023 report shows that people are readily embracing generative AI to enable productivity and efficiency and empower them to provide a better experience for their customers.
However, while ChatGPT's big splash may give the impression that AI is new, it's actually been solving big business problems behind the scenes for years.
As someone who has been fully immersed in the business applications of AI for the past decade, I want to offer a peek under the hood to show how AI can help businesses operate more efficiently and effectively.
Understanding these applications can help you derive more benefit from the current AI darling, generative AI, as well.
Keeping The “Intelligence” In Artificial Intelligence Generative AI has become a must-have tool for businesses, and its outputs can create real efficiency in countless ways.
However, the outputs depend on the information that goes into it, whether that's the prompts it's given or the data it's trained on.
Modern organizations can use AI to take the huge volumes of data they have access to and turn that data into intelligence.
In B2B marketing and selling, for example, companies use an incredible depth and variety of data to reach the right customers with relevant messages at the ideal time in their buying journey.
Having the data is only the first step, though.
Without the ability to analyze it and turn it into intelligence, revenue teams operate on guesswork, which contributes to substantial wasted spend and missed opportunities each year, according to Boston Consulting Group.
This is where other types of AI come in to analyze the data and turn it into intelligence that businesses can use.
Four key types of AI-powered analytics are useful in business, including sales and marketing initiatives: descriptive, diagnostic, predictive and prescriptive.
Here's a simple way to think of them (a toy example follows the list):
• Descriptive analytics pulls historical data, showing what has already happened. It's best used to present KPIs such as year-over-year sales growth and helps add data to presentations and dashboards.
• Diagnostic analytics looks at why something happened. It helps organizations understand the cause of trends and correlations between variables.
• Predictive analytics helps determine what's most likely to happen next: future outcomes. In sales and marketing, organizations can use predictive analytics to analyze millions of buying signals to predict which accounts are most likely to buy what they're selling so they can appropriately prioritize time, budget and resources.
• Prescriptive analytics helps determine what to do about the insights gleaned from your data, for instance, how to leverage them to improve revenue.
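To make the first and third categories concrete, here is a toy sketch with invented revenue figures: a descriptive year-over-year summary alongside a deliberately naive projection. Real predictive analytics would draw on far richer buying signals and proper models.

```python
# Toy example only: invented yearly revenue figures, not real data.
import pandas as pd

sales = pd.DataFrame({
    "year": [2021, 2021, 2022, 2022, 2023, 2023],
    "revenue": [120, 130, 150, 160, 180, 200],
})

# Descriptive: summarize what has already happened (year-over-year growth).
yearly = sales.groupby("year")["revenue"].sum()
growth = yearly.pct_change()
print(growth)

# "Predictive" in the crudest sense: project next year from average growth.
projection = yearly.iloc[-1] * (1 + growth.mean())
print(f"Naive projection for next year: {projection:.0f}")
```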
Best Practices And Considerations For Generative AI In Marketing And Sales Together, these uses of AI make generative AI tools more powerful.
Given the right information, generative AI can improve processes and help build content, even generate revenue by giving users the ability to more effectively identify, reach and engage with prospects.
Some of the ways organizations can effectively leverage generative AI in marketing and sales work include creating more personalized messaging to maximize the effectiveness of email outreach and marketing campaigns, aiding SEO search research by uncovering keywords that align with targets, and maximizing advertising ROI by tailoring ads to buying stage, persona and interest.
It’s important to keep in mind a few considerations when incorporating generative AI into workflows, including: In other words, quality results require quality data.
In addition, it takes time to fine-tune and prompt the tool the right way.
Robust data and the AI required to process it into intelligence can create easier processes, more effective workflows and more accurate outputs.
To promote generative AI from novelty to needle-mover, it’s important to set and measure real business goals for the technology.
For example, we set a goal of generating 10% of our pipeline from AI-powered conversational emails in one year.
That big goal stretched the team to think differently and consider new technology to make it happen.
The approach was so effective that we accomplished the goal in less than six months, propelling our team to continue the campaigns and expand into additional use cases.
With the generative AI wave emerging so quickly, it's important to remember that adopting the technology is not a one-and-done exercise.
We can expect things to keep changing quickly; the deluge of new tools and applications that are emerging seemingly daily is proof of that.
Stay agile and be ready to adapt to what’s next, whether that’s government regulation or new tools and use cases.
Creating An Effective AI Toolkit
Once people know what to expect from technology and have the digital skills needed to work with it, the right tools can empower them to be more effective at their jobs.
They’ll be armed with powerful insights while saving time, avoiding stress and eliminating needless busy work.
There’s no doubt that generative AI is a game-changer.
It has the potential to create real results for marketers and sellers.
When quality data and intelligence are feeding it, it becomes more than a novelty; it's a tool to truly transform the way we work.
Temperature plays a huge role in how quickly we fall asleep and what the quality of sleep is like once we drift off.
A 2023 study shows that the optimum temperature for sleep is between 68 and 77 °F (20 and 25 °C), so you may find yourself struggling to fall asleep if your bedroom is too warm (or too cold).
Feeling overheated in bed quickly disrupts your sleep and can leave you feeling dehydrated and even cause hot flashes and night sweats.
While turning on the AC or adjusting your smart thermostat so that your room is under 77 °F can help, some hot sleepers need some extra help from their mattress.
While the best mattresses usually include temperature-regulating materials to cool hot sleepers, some beds are specifically designed for those who overheat at night.
These are the best cooling mattresses of the year, designed with state-of-the-art materials and mattress technologies to help you sleep cool and dry.
But how do they work, and do they work?
We cover everything you need to know in this guide.
Plus, we'll recommend some of our favourite cooling beds to buy for less in this month's mattress sales.
A cooling mattress is any type of bed (from innerspring to memory foam) designed to keep sleepers cool and dry using temperature-balancing materials.
While a lot of mattresses do provide some cooling features, such as organic, breathable materials or coil layers for ventilation, this doesn't mean they are technically designed for cooling.
Cooling mattresses, meanwhile, are made to prevent you from overheating, which makes them more expensive than standard beds.
Cooling mattresses can employ a wide range of different cooling methods.
Here are the most popular methods.
Innerspring and hybrid mattresses tend to be the most breathable thanks to the springs or coils they contain, which promote airflow for a cooler sleep.
The best hybrid mattresses usually come with a supportive layer of individually wrapped coils and springs to ensure airflow, as well as a breathable cover and a layer of memory foam infused with cooling gel.
Mattresses made from latex are also naturally cooling, and can help reduce overheating at night.
Memory foam beds, on the other hand, are the least cooling type of mattress, as they are notorious for trapping body heat. However, cooling mattresses are made from all sorts of materials, even memory foam, as they use advanced temperature-regulating tech and materials to boost the cooling power of a mattress and offset any heat-trapping tendencies.
So, do cooling mattresses actually work to stop night sweats, overheating, and generally sleeping hot?
While testing out some of the most popular cooling mattresses on the market, our testers found that most cooling mattresses prevented overheating and kept them cool and dry, or at least cooler than before.
While cooling mattresses are designed to regulate temperature, some mattresses canât live up to their promises when up against those who suffer from extreme overheating or night sweats.
Anyone who lives in a warm climate, suffers from overheating or night sweats, or deals with hot flashes brought on by natural conditions such as menopause would benefit from a proper cooling bed.
In fact, a study on menopause found that sleeping on a cooling mattress can reduce the frequency and severity of sleep disturbances by over 52%.
For example, we discovered that many common cooling methods, such as the addition of gel and copper into the foam, didn't stand up against heat waves or hot flashes.
In fact, we advise anyone to take the benefits of gel and metal infusions with some scepticism, as we find the temperature difference they create to be minimal.
However, other cooling methods have been shown to make a big difference to temperature.
For instance, researchers found that Phase Change Material in mattresses lowered skin temperature and improved human heat dissipation by up to 25.6% compared to a conventional mattress.
In fact, we discovered that gel and metal infusions tend to work better in tandem with PCMs.
Another cooling material we found to be effective against night sweats and overheating is GlacioTex, which is used in a lot of cooling beds' covers.
We found this fabric to be cool-to-the-touch and excellent for keeping dry when prone to night sweats.
We also discovered that the addition of coils, springs, and open, air-circulating structures helps to enhance breathability for a cool night's sleep.
Cooling gel-infused memory foam was created to offset the heat-trapping effect of memory foam and to provide a cooler surface to sleep on.
The gel is designed to draw heat away from your body and wick away moisture from night sweats to keep you cool and dry, and it's infused into foam in a number of ways: incorporating a gel pad layer on top or in the middle of the mattress; pouring it into the foam mold as it sets; or placing it within the mattress layers in bead form.
Despite its prevalence in memory foam beds, cooling gel is synthetic and not for those looking for an all-natural bed.
According to sleep brand Turmerry, cooling gels contain toxic chemicals, including benzene, formaldehyde and naphthalene, which are unfit for human use and harmful to the environment.
According to eco-luxury mattress brand Avocado, exposure to formaldehyde is known to cause adverse health effects.
1. Cocoon by Sealy Chill: from $539 (was $839) at Cocoon by Sealy. This foam mattress made our best cooling mattress guide for its excellent value, pressure relief and, of course, temperature regulation.
It provides both comfort and coolness through the brand’s Phase Change Material memory foam, which dissipates body heat while cradling your pressure points.
Couples may also want to check this cooling bed out, as our Cocoon Chill Memory Foam Mattress review found that the memory foam delivers superb motion isolation. Plus, you'll never pay full price for the Chill mattress: the evergreen Cocoon by Sealy mattress sale knocks 35% off MSRP, which brings the price of a queen to $899.
The bed also comes with a 10-year warranty, 100-night sleep trial, and free shipping.
2. Purple Mattress: from $599 (was $799) at Purple. In our Purple Mattress review, we were impressed with the Purple Original's polymer grid system (there are over 2,800 open air channels to allow cooling throughout the mattress), the breathable foams, and soft flex cover.
Extras include a 10-year warranty, 100-night sleep trial, and free shipping.
You can’t always get a discount on Purple’s mattresses, but there’s currently up to $400 off the original mattress.
A queen is priced at $999 (was $1,399) with this discount, which beats the Black Friday deal that only knocked $200 off the MSRP.
3. GhostBed Luxe Cooling Mattress: from $1,298 (was $2,595) at GhostBed. The GhostBed Luxe uses the brand's patented Ghost Ice technology, which testers for our GhostBed Luxe Cooling Mattress review said makes the top cover cool to the touch while absorbing heat and drawing it away from the body throughout the night.
We also rate it as a great mattress for side sleepers and couples thanks to its medium-plush feel and strong motion isolation.
The GhostBed Luxe is never sold for MSRP; there's a permanent 50% off deal.
While freebies aren't currently included, you can sometimes get two free pillows with your purchase.
Extras include a 25-year warranty, a 101-night sleep trial, and free shipping.
2024 is shaping up to be a consequential year for the 3-D printing industry.
On the business side, the industry is facing a wave of consolidation as companies jockey to solidify their position, gather market share and avoid getting squeezed out.
On the tech side, the convergence of 3-D printing with next-gen technologies like AI is set to transform the industry further, unleashing innovative applications and use cases.
Here are four major trends that will impact the industry in the coming year.
1. Consolidation will take center stage.
Several years ago, a number of leading 3-D printing companies made headlines when they announced plans to go public via blank check companies, also known as SPACs.
They received rich valuations at the time.
Since then, like a lot of tech companies that debuted during the pandemic era of 2020 to 2021, they've seen their valuations collapse, some by up to 90%.
Given this remarkable value evaporation, it stands to reason that 2024 is the year that something will have to give, as there is now tremendous pressure on CEOs of 3-D printing companies to significantly improve their outcomes and to, among other factors, stem the growing tide of investor dissatisfaction with the sector as a whole.
The result will likely yield even greater consolidation in the industry.
Furthermore, over the next 12 months, many promising companies, given both the scarcity of recent combinations and the sparse capital investment climate, will have no choice but to consolidate because they simply won't have the cash to continue to operate as standalone entities.
2. Customers will benefit from changing industry dynamics.
This likely wave of consolidation will be a positive development for customersâand has already started.
Essentium recently announced our own combination, and BigRep, a Berlin-based company, recently announced its own, followed by a deal with a blank-check company to go public on the Frankfurt exchange.
Looking at the industry from a customer lens, as things stand today, there are simply too many offerings in the 3-D printing market, and customers are unsure where to turn for solutions most relevant to their application space.
This slows adoption and weakens the overall sector.
The situation is similar to the auto industry 60 to 80 years ago when there were scores of new car companies in the U.S.
A wave of consolidation followed.
General Motors, for instance, came about through the merging of about a dozen different car brands, and nearly all of today's global automakers are themselves a collection of brands, having followed a similar strategy.
There are now only a handful of truly viable car companies worldwide.
And that's good for consumers because they know that only the automakers with the highest-quality production have survived.
Currently, in the 3-D printing market, customers have too many choices.
They can't easily understand which ones are the best.
But going forward, thanks largely to consolidation, customers in the 3-D printing market will find it easier to make the right choice quickly.
The industry needs the financial markets to reengage rationally in 2024, further helping to cull the herd and allowing the strongest 3-D printing companies to strengthen and accelerate.
3. 3-D innovation will create new markets.
As we think about the organic innovation spaces in 2024, growth will abound there, too.
We look with interest at the increasing demand for creating precise, small-scale objects via nanoscale 3-D printing.
One obvious use case for the technology is microelectronics, such as nanoscale transistors, sensors and memory devices.
One less obvious but equally exciting use case is biomedical, where there are incredibly promising applications in fields like nanomedicine, cancer treatment and drug delivery.
It is becoming increasingly clear that nanoscale 3-D printing and traditional additive manufacturing can play a significant role in personalized and other next-generation medicine.
Liquid metal printing is another interesting, comparatively low-energy technology that may rise to greater prominence in 2024.
This technique takes molten metal derived from commodity aluminum and wire feeds into the printing machine.
Its proponents claim that liquid metal printing may be up to 10 times faster than other additive manufacturing (AM) methods while remaining safer and more environmentally friendly, with a relatively low total application cost.
4. AI will play a growing role.
Though not yet as mature as large language model-based techniques, smarter and more predictive model-based tools for AM have seen considerable development effort.
I believe that in 2024, more 3-D printers will begin leveraging AI models that can be used to predictively identify adverse events and prevent print failures.
Building on this entry-level AI capability, AI will soon be able to tell in real time, as a machine is printing a part, whether there is a problem, and then work to correct the print before completion.
This information will not only help correct that particular printing job but can also be fed back into the model to improve parametric design tools, further realizing the full benefits of AM.
Ultimately, just as assistive technology tools can today create 2-D images and graphics with tremendous ease, 3-D AI will be able to optimize the design of a 3-D-printed object by minimizing its weight and maximizing its strength, which will add significant value to 3-D printing overall.
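To make the idea of in-process monitoring concrete, here is a minimal sketch of the kind of layer-by-layer check such a system might run. The sensor fields, baseline statistic and thresholds are illustrative assumptions, not any vendor's actual interface; a production system would rely on a trained model and the machine's real telemetry.

```python
# A rough sketch of in-process anomaly detection on print telemetry.
# Sensor fields, the baseline statistic and thresholds are illustrative only.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class LayerReading:
    layer: int
    nozzle_temp_c: float
    extrusion_rate: float  # ratio of actual to commanded flow, 1.0 = nominal

def layer_warnings(history: list[LayerReading], new: LayerReading, z: float = 3.0) -> list[str]:
    """Flag a new layer that deviates strongly from the run so far."""
    warnings: list[str] = []
    if len(history) >= 10:  # wait for a baseline before judging temperature drift
        temps = [r.nozzle_temp_c for r in history]
        spread = pstdev(temps) or 1.0
        if abs(new.nozzle_temp_c - mean(temps)) > z * spread:
            warnings.append(f"layer {new.layer}: nozzle temperature out of band")
    if new.extrusion_rate < 0.85:  # under-extrusion often precedes adhesion failures
        warnings.append(f"layer {new.layer}: possible under-extrusion")
    return warnings
```

A controller loop could call something like this after every layer and pause the job, or adjust parameters, once warnings accumulate.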
Consider this final takeaway.
For years, market watchers have described 3-D printing as one of those industries “still in its infancy.”
To extend the metaphor, we can say that it's now entering its adolescence.
The years ahead will see 3-D printing grow as it finds its strengths and learns where it can contribute.
Belief in the space remains robust.
Overall, 2024 will be a pivotal and exciting period in 3-D printing's maturation journey.
Generative AI (GenAI) is the talk of the tech world right now.
From Google to a host of other companies announcing new GenAI features, everyone is eager to implement GenAI tools into their offerings.
GenAI is also making a splash in software development and mobile applications.
In fact, driven by features like voice assistants, chatbots and facial recognition, the AI-driven app sector has already generated substantial revenue and is estimated to grow at a compound annual growth rate of 38.3% through 2028.
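For readers who want to sanity-check what a 38.3% compound annual growth rate implies, the math is simple compounding. The 2023 base value below is a placeholder for illustration, not a figure from the article; only the growth rate comes from the text above.

```python
# Compound annual growth: value_n = value_0 * (1 + cagr) ** n.
# The base value is hypothetical; only the 38.3% rate comes from the text above.
value = 10.0   # placeholder 2023 market size, in billions of dollars
cagr = 0.383
for year in range(2024, 2029):
    value *= 1 + cagr
    print(f"{year}: ~${value:.1f}B")
```

At that rate, the sector roughly quintuples over five years, whatever the starting figure.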
While GenAI can be an excellent resource to enhance user experience, personalization and more, it's nothing without accurate and robust data.
AI models rely on data to learn patterns, make predictions and perform tasks.
If the training data is compromised, inaccurate or contains errors, the model can produce biased and unreliable results, leading to poor performance and user abandonment.
As GenAI models become increasingly popular, software developers must focus efforts on maintaining data integrity to ensure their AI-driven solutions are precise, effective and efficient.
Benefits And Challenges Of Data Integrity
Broadly speaking, data integrity matters because it enables organizations to avoid costly consequences, make better decisions, implement more reliable processes and reduce compliance issues.
It can also lead to better customer and stakeholder experiences, increased revenue and market share and reduced risk.
Without quality data, companies will have a hard time managing these increasingly complex applications and ecosystems.
For AI in particular, accurate and secure data can help optimize the software engineering process and create better AI-powered tools.
However, maintaining data integrity throughout the software development process has become somewhat of a challenge.
From collection to utilization, data moves through various data pipelines, which can lead to gaps, blind spots and mishandled and inaccurate data.
It's likely that data is collected from multiple sources, meaning it may have different versions or iterations and have passed through many different hands, all leading to discrepancies.
Further, when inaccurate or compromised data is uncovered, finding precisely where the data broke along these pipelines can be an expensive, time-consuming and frustrating endeavor.
When developers are given inaccurate or unreliable data, it can undermine the functionality and security of applications and lead to a host of issues, including poor user experience, security vulnerabilities and regulatory risks.
Fortunately, despite the challenges around maintaining robust and accurate data throughout the software development life cycle, there are ways to achieve data integrity.
Maintaining Data Integrity in an AI-Driven World
In our new AI-powered world, maintaining data integrity is of the utmost importance.
There are three key aspects of a data-integrity strategy that software engineers must use to ensure their data is comprehensive and error-free: testing, monitoring and management.
Testing is crucial to maintaining data integrity as it validates the accuracy and reliability of data.
Testing also confirms that data is complete and meets standards and ensures that integrity is maintained when changes or updates are made.
End-to-end, automated testing solutions allow developers to catch more data errors upfront and improve quality at scale.
These solutions also empower developers to test data, UI and API layers, regardless of data type, source or format.
Testing solutions also ensure developers can find data anomalies upfront, flag areas for improvement and keep pace with the ever-changing environment.
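As a concrete illustration of the kind of automated check such a testing pipeline might run, here is a minimal, self-contained sketch. The column names and validation rules are assumptions made for the example, not any particular product's API.

```python
# Minimal data-integrity checks that could run automatically on each new data drop.
# Column names and validation rules are illustrative.
import csv

REQUIRED_COLUMNS = ["user_id", "email", "created_at"]

def validate_csv(path: str) -> list[str]:
    """Return a list of human-readable integrity errors found in the file."""
    errors: list[str] = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
        if missing:
            return [f"missing columns: {missing}"]
        seen_ids: set[str] = set()
        for line_no, row in enumerate(reader, start=2):  # header is line 1
            if not row["email"] or "@" not in row["email"]:
                errors.append(f"line {line_no}: malformed email")
            if row["user_id"] in seen_ids:
                errors.append(f"line {line_no}: duplicate user_id {row['user_id']}")
            seen_ids.add(row["user_id"])
    return errors
```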
Allowing for real-time detection and response, data monitoring should not be overlooked.
The continuous monitoring of data sources and flows is critical to maintaining data integrity as it helps developers catch issues, errors and unauthorized changes as quickly as possible.
Through alert systems and comprehensive monitoring practices, developers can identify and respond to issues that can compromise data integrity.
They can also assess overarching system performance to ensure that data operations are running efficiently and accurately.
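A monitoring hook can be equally small. In the sketch below, the staleness and volume thresholds and the alert transport (a plain print) are stand-ins for whatever freshness rules and paging system a team actually uses.

```python
# Minimal freshness/volume monitor; thresholds and the alert channel are illustrative.
import time

def check_feed(last_update_epoch: float, row_count: int,
               max_staleness_s: float = 3600, min_rows: int = 1000) -> list[str]:
    alerts: list[str] = []
    if time.time() - last_update_epoch > max_staleness_s:
        alerts.append("feed is stale: no update within the allowed window")
    if row_count < min_rows:
        alerts.append(f"row count {row_count} is below the expected minimum of {min_rows}")
    return alerts

# Example: a feed last updated two hours ago with an unusually small batch.
for alert in check_feed(last_update_epoch=time.time() - 7200, row_count=250):
    print("ALERT:", alert)  # in practice, route to a paging or ticketing system
```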
Management sets the framework for data integrity.
Data management establishes standards, rules and responsibilities for handling any and all data.
With effective data management practices, developers can maintain data integrity by ensuring data is handled properly throughout its life cycle, from creation to archiving to deletion; providing the correct access and control authentications to prevent unauthorized changes; and establishing a data backup and recovery process in case of any issues.
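One small, concrete control from the management side is to keep a baseline of content hashes so that unauthorized or accidental changes to archived data are detectable before a backup is trusted. The file paths and baseline format below are assumptions for the sketch, not a prescribed standard.

```python
# Detect tampering or silent corruption by comparing files against recorded hashes.
# The baseline format (a simple path -> sha256 JSON map) is illustrative.
import hashlib, json, pathlib

def sha256_of(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def record_baseline(paths: list[str], baseline_file: str = "hashes.json") -> None:
    pathlib.Path(baseline_file).write_text(json.dumps({p: sha256_of(p) for p in paths}))

def verify(path: str, baseline_file: str = "hashes.json") -> bool:
    baseline = json.loads(pathlib.Path(baseline_file).read_text())
    return baseline.get(path) == sha256_of(path)
```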
Ensuring Data Integrity Within Your Business
Equipped with a better understanding of data integrity and the key components that make up a solid strategy, the next steps are to ensure software development teams are on the same page and put a plan into action:
Is data being appropriately monitored and managed?
How is it being testedâmanually or with multiple-point solution tools?
If theyâre using tools, are the outputs accurate?
Are you looking to gain better insights from your data to make better business decisions?
Do you need data to avoid regulatory compliance issues?
Are you focused on a low-budget tool, a multi-tool offering or something with integration and network support?
Not all tools perform the same or tackle the same roadblocks, so picking the one that is right for you is critical.
Better Data, Better Outcomes
While the adoption of generative AI is on the rise, the linchpin to success lies in data integrity.
The accuracy and reliability of AI models are tied to the quality of the data they are trained on.
Without a foundation of accurate and robust data, GenAI cannot be the powerhouse tool we expect it to be.
Despite the challenges around maintaining data integrity throughout the software engineering process, organizations have key strategies at their disposal to ensure data integrity.
With comprehensive testing, real-time monitoring and robust data management practices, organizations can unleash the full potential of GenAI while upholding the trust and reliability of AI-powered solutions.
The "work game" is changing rapidly, and so are the expectations and needs of employers and employees.
Traditional values, loyalty and standard corporate-ladder growth are being displaced by a focus on niche professional development as well as ad hoc, project-based engagements.
This has changed how we view “work” and how companies engage talent.
More companies, for example, are embracing on-demand talent: the growing pool of freelancers, independent contractors, gig workers and other self-employed professionals who can be hired on a temporary or project basis.
As HBS and BCG reported, many companies now use on-demand talent platforms to access highly skilled workers and view these platforms as somewhat or very important to their future competitive advantage.
Drivers Of On-Demand Talent
To understand this shift, let's first look at a few reasons why companies are looking to on-demand talent to solve their organization's needs:
Although the talent gap seemed to cool off this year, it remains significant.
In the U.S., for example, millions of jobs remained unfilled as of September 2023.
According to a recent Manpower survey, a large majority of employers globally report difficulty filling roles, including in IT.
Finding and hiring qualified tech professionals is not only hard but also time-consuming and costly.
Freelancers are also often very specialized in niche areas.
By using them at the right moment and with the right goals in mind, the on-demand workforce can provide companies with access to a large and diverse pool of experts in various fields, domains and technologies.
The gig economy and remote work have changed the preferences and expectations of workers, especially in the tech sector.
Many employees seek contract work because it offers more autonomy, flexibility, variety and control over their work-life balance.
In 2023, according to Fiverr, many of the professionals surveyed said they would either start or continue freelancing.
In 2021, an Upwork study found that most people who switched to freelancing say they earn the same or more money.
Fifty-eight percent of non-freelancers say they are considering freelancing in the future.
On-demand talent can also help companies scale their workforce as needed, without having to commit to fixed contracts.
This talent can help companies expand into new markets, geographies or niches by providing local insights, connections and cultural fit.
The pace of technological change and innovation is accelerating.
Companies need to keep up with trends, tools and solutions.
With the wide adoption of AI especially, the talent skill set gap can slow innovation.
Developing new products, services or features requires a lot of resources, expertise and agility, which may not be available in-house.
On-demand talent helps companies access the skills and knowledge they need quickly.
In the HBS and BCG study cited above, 40% of users of on-demand talent platforms reported that accessing highly skilled workers through digital talent platforms helped improve speed to market, boost productivity and increase innovation.
On-Demand Talent Best Practices
While there are symbiotic benefits for both employees and employers in the on-demand talent model, companies looking to engage these workers may need to change their processes and adapt to this new world of work.
Here is a streamlined look at the process that can give companies the best chance of success:
Before engaging on-demand talent, define the work and determine which type, level and amount of talent is needed, and for how long.
You need to set clear expectations, goals and deliverables for the work, and align them with the contractors.
To locate and select the best on-demand talent, you should use reliable platforms or services that can provide you with access to a quality local pool of vetted, verified and qualified workers.
For a smooth and successful kick-off, you need to provide on-demand workers with all the necessary information, resources and tools for the job.
You need to introduce and connect the on-demand workers with the in-house staff and foster a culture of collaboration, trust and respect.
Just as with employees, you need to manage and communicate with contractors and provide them with regular and constructive feedback and support.
Use effective and secure platforms, channels and tools to communicate and collaborate, as well as to monitor and track their progress and performance.
Make sure to give the right recognition, compensation and appreciation.
It’s not necessary to measure contract employees against long-term cultural fit or to focus in the same way on corporate values.
To succeed with on-demand employees, choose specific project and outcome-based performance indicators.
Align these goals with the specific job.
Conclusion
On-demand talent is a powerful and strategic resource that can help companies scale their operations efficiently, especially in the tech sector.
By using on-demand talent, companies can access specialized skills and knowledge, accelerate their innovation, become more agile, reduce their costs and risks and gain a competitive advantage.
However, companies need to adopt best practices and ensure that they have a clear strategy, process and platform for engaging, managing and integrating the on-demand workers.
Doing so can create a hybrid and flexible workforce that can meet current and future needs and achieve business goals.
Meta is systematically suppressing pro-Palestine content on Instagram and Facebook, a report from Human Rights Watch claims.
In the report, the campaign group alleges that the company has been removing or suppressing peaceful expression in support of Palestinians, along with public debate about Palestinian human rights.
The reason, the Human Rights Watch report said, is a mixture of flawed policies, implemented inconsistently and erroneously, and an over-reliance on automated moderation tools, along with undue government influence over content removals.
"Meta's censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians' expression," said Deborah Brown, acting associate technology and human rights director at Human Rights Watch.
"Social media is an essential platform for people to bear witness and speak out against abuses, while Meta's censorship is furthering the erasure of Palestinians' suffering."
HRW said it has identified several key patterns of censorship, each of which was spotted more than 100 times.
These include content removals, suspension or deletion of accounts, inability to engage with content, inability to follow or tag accounts, restrictions on the use of features such as Instagram/Facebook Live, and shadow banning, whereby an individual's posts, stories or account are made less visible without notification.
"In hundreds of the cases documented, Meta invoked its Dangerous Organizations and Individuals policy, which fully incorporates the United States' designated lists of 'terrorist organizations,'" HRW said.
“Meta has cited these lists and applied them sweepingly to restrict legitimate speech around hostilities between Israel and Palestinian armed groups.”
In more than 300 cases, it says users were unable to appeal content or account removal because the appeal mechanism malfunctioned.
Meta has been approached for comment.
The report follows widespread suspicion about the way the platform is handling content related to the conflict.
Earlier this month, for example, Senator Elizabeth Warren (D-Mass.) wrote to Meta CEO Mark Zuckerberg, expressing concerns and asking the company for more information on how its policies were being applied.
“Amidst the horrific Hamas terrorist attacks in Israel, a humanitarian catastrophe including the deaths of thousands of civilians in Gaza, and the killing of dozens of journalists, it is more important than ever that social media platforms do not censor truthful and legitimate content, particularly as people around the world turn to online communities to share and find information about developments in the region,” Warren wrote.
“Social media users deserve to know when and why their accounts and posts are restricted, particularly on the largest platforms where vital information-sharing occurs.”
Meanwhile, the independent Meta Oversight Board said last week that the company's use of automated tools for content moderation increased the chances that valuable posts about the human suffering on both sides of the conflict would be wrongly removed.
“Social media platforms like Facebook and Instagram are often the only vehicles during armed conflicts to provide information, especially when the access of journalists is limited or even banned,” said Michael McConnell, a co-chair of the board.
Two years ago, following a recommendation from the Oversight Board, Meta commissioned the independent Business for Social Responsibility to investigate whether Facebook had applied its content moderation in Arabic and Hebrew, including its use of automation, without bias.
But it didn’t go well for Meta, with BSR concluding that the company’s actions “appear to have had an adverse human rights impact… on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred”.
All these critics are calling for changes on Meta’s part.
It should overhaul its Dangerous Organizations and Individuals policy to bring it into line with international human rights standards, audit its content removal policies and conduct due diligence on the human rights impact of the temporary changes to its recommendation algorithms that it introduced in response to the recent hostilities, says HRW.
“Instead of tired apologies and empty promises, Meta should demonstrate that it is serious about addressing Palestine-related censorship once and for all by taking concrete steps toward transparency and remediation,” said Brown.
Meta has been approached for comment but has not yet responded.
Did you know that, as of 2020, staggering volumes of data are generated every day, according to the World Economic Forum?
Many companies are struggling to manage that data.
In an interview hosted by IBM, renowned data leaders Srinivasan Sankar and Dr. Rania Khalaf asserted that companies should keep a tight grip on their critical data: "Keeping a handle on critical and regulated data elements, such as names, addresses…and more, is essential to running various business systems without duplication errors, unreliable searches or privacy breaches."
In other words, implementing and scaling address verification across multiple countries can be incredibly difficult to execute manually.
This may be true.
However, several factors must be considered before implementing a global address validation system.
There are multiple ways to do standardization effectively and, more importantly, numerous complexities that should be avoided.
Data Quality Complexities To Consider
Our planet is home to roughly 200 countries and thousands of languages and scripts. Multiply those stats by our diversified histories, cultures, geographies, government styles and laws, and it's no wonder that so many companies lose control of their address data.
It’s also easy to understand why validating global addresses is complicated.
Here are a few types of address complexities impacting organizations.
A plethora of language variations and character sets exist in the world.
Left unchecked, special characters, accents and non-Latin scripts cause frequent errors in data entry and validation processes.
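One small defensive step, sketched below, is to canonicalize Unicode before any comparison or validation so that composed and decomposed accent forms of the same street name are treated as equal. The choice of NFC normalization here is a common default, not a universal rule.

```python
# Normalize Unicode in address fields so visually identical strings compare equal.
import unicodedata

def normalize_address_field(value: str) -> str:
    value = unicodedata.normalize("NFC", value)  # canonical composition of accents
    return " ".join(value.split())               # collapse stray whitespace

composed = "Café"           # single code point U+00E9
decomposed = "Cafe\u0301"   # 'e' followed by a combining acute accent
assert normalize_address_field(composed) == normalize_address_field(decomposed)
```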
International address data handling is subject to a complex web of regulations, including data privacy laws like GDPR in Europe, LGPD in Brazil, PIPA in South Korea, APPI in Japan and PIPEDA in Canada.
Ensuring compliance with these regulations while validating and storing addresses is a significant challenge.
International address databases typically contain inaccuracies or outdated information as street names, cities, postal codes and other address components change.
This leads to failed deliveries, customer dissatisfaction and increased operational costs, including added labor costs, potential loss of product, package reshipment costs and more.
Worldwide address validation must account for specific geographic nuances and edge cases, such as islands, remote areas or regions with non-standard addressing systems.
Geocoding and mapping challenges also arise.
Understanding local customs, naming conventions and cultural norms is important when validating international addresses.
A mistake in addressing can be seen as a sign of disrespect in some cultures, which can cost you future business.
International addresses often contain more components than just the street and postal code, including building names, floors and unit numbers.
This is especially true in high-population areas such as Cairo, Beijing and Paris.
Managing and validating these components can become incredibly complex.
For example, having lived in both China and Germany myself, I can attest that their address formats are quite different, and the structures vary greatly depending on the intended recipient.
Integrating international address validation into existing systems can be challenging due to differences in data structures and APIs between countries.
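When that integration does happen, it usually looks something like the sketch below: the application posts the captured fields and branches on the verdict. The endpoint, payload and response fields are hypothetical placeholders, since every vendor defines its own schema.

```python
# Hypothetical integration with an address-validation API; the URL, request body
# and response shape are placeholders, not any specific vendor's interface.
import requests

def validate_address(line1: str, city: str, postal_code: str, country: str) -> dict:
    resp = requests.post(
        "https://api.example-address-validator.test/v1/verify",  # placeholder endpoint
        json={"line1": line1, "city": city, "postal_code": postal_code, "country": country},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"deliverable": true, "normalized": {...}}

# result = validate_address("10 Downing St", "London", "SW1A 2AA", "GB")
# if not result.get("deliverable"): prompt the user to correct the address
```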
How To Manage Global Address Verification
Global address validation plays a vital role in confronting these complexities with grace, effectively enhancing customer experiences and operational efficiency.
Ensuring data accuracy and consistency across borders is likely a daunting task, though, especially when there are some potential downsides to global verification.
Some address validation software might be complex and not user-friendly, requiring extensive training or expertise to utilize effectively.
Additionally, relying solely on software without human verification can lead to over-dependency and blind trust, potentially causing errors to go unnoticed or uncorrected.
Others may simply be concerned with the ability of the program to keep up with change worldwide.
While these are all legitimate concerns, asking potential software vendors these questions can help.
1. What type of coverage do they provide? Does it include the countries your business requires?
2. Do they support address transliteration in non-Latin languages?
3. Is their software frequently maintained and updated for accuracy?
4. Do they provide a globalized address autocomplete that will validate the address at every entry point, including editing?
5. Do they offer an API interface to integrate validation directly into your systems for ease of use and implementation?
6. How are their documentation and support? Do they offer supported SDKs to facilitate implementation? Do they offer timely phone, email or chat support?
7. What kind of uptime and response time are promised in their service level agreement?
Answering these questions should help you find a company that meets your needs and, hopefully, helps you avoid the pitfalls.
But perhaps your company is not yet ready for globalized verification at that scale.
Beyond adopting this kind of software, there are internal steps you can take to ensure cyber compliance and improve your data hygiene.
Have a dedicated team periodically review and cleanse your data.
Introducing more structured spreadsheets or CRMs may be a useful route to organizing and standardizing your data manually.
This can help to eliminate invalid information from being entered into your system.
Finally, regularly checking in with your customers to get updated information is a must.
It's also a way for your information to be verified repeatedly rather than only at the initial entry point.
With human error behind the vast majority of cybersecurity incidents, employees are inevitably the weakest link in any organization's security chain.
From falling for phishing emails to using a weak password or sharing sensitive data via unencrypted channels, even a small lapse in judgment can open the door to costly and disruptive attacks.
As a result, firms are increasingly prioritizing end-user cybersecurity training and awareness programs.
Regulatory bodies are also stepping in to mandate these initiatives.
In the U.K., for example, the Information Commissioner's Office (ICO) now expects all organizations to demonstrate completion of cyber awareness training by all new starters, ongoing training for all employees and management of non-attendees.
Organizations are realizing the need to build cybersecurity into the employee experience, but how many are delivering security education and awareness programs that truly connect with the daily responsibilities and experiences of their staff?
Often, cybersecurity training becomes just another checkbox-compliance exercise.
Organizations give employees some brief PowerPoint presentations or the same dry instructional materials year after year, resulting in disengaged staff members who multitask or simply tune out while waiting for the end-of-training quiz.
As Gartner, Inc. noted in its top cybersecurity trends for 2023, human-centric security design is becoming an increasingly important factor in cybersecurity programs.
Without a user-centric, role-based approach that focuses on the employees’ perspectives and challenges and genuinely engages them, even the most well-thought-out cybersecurity training programs will ultimately fail to stick.
Building a security-conscious culture then becomes much harder, leaving organizations more vulnerable to evolving cyber risks.
Fighting Security Fatigue
Cybersecurity is an ever-changing picture.
As new technologies and applications evolve, new attack vectors are constantly emerging, with rapidly changing policies and procedures adding to the confusion for many end users.
This stimulates a continuous cycle of improvement and adaptation.
Constant exposure to security warnings and policy rules at work can overwhelm employees, with all of the information eventually blending into background noise.
What’s more, some of the measures that organizations implement to mitigate cybersecurity risk may hinder productivity or even introduce new security vulnerabilities.
For example, a firm may introduce multifactor authentication (MFA) to reduce the likelihood of phishing attacks.
While most employees will gradually adapt to this new technology, they may face a barrage of push notifications designed to trick them into authenticating fraudulent login attempts.
This constant game of whack-a-mole can be just as exasperating for employees as it is for security leaders.
Cybersecurity also often conflicts with our natural human instinct to trust, urging us to adopt a more skeptical mindset.
This can jar and tire staff members, particularly when they’re already balancing the numerous demands of day-to-day business.
As we’re all more prone to making mistakes when stressed or weary, it comes as little surprise that security lapses can easily occur.
Security fatigue poses a significant challenge for both individuals and firms.
To tackle this and reinvigorate staff vigilance, firms must put the human experience first.
This includes providing employees with informative resources and leadership support.
Leadership teams can encourage employee involvement by actively promoting the significance of a cyber-conscious culture.
They should adapt and implement engaging cybersecurity training programs that are tailored to specific job roles and responsibilities, empowering employees to effectively manage cybersecurity risks without overwhelming them.
Reinforcing Culture And Making Security Stick
It's not enough to schedule regular cybersecurity awareness sessions.
Training must be compelling, interactive and relevant to employees and their daily tasks to facilitate better retention of knowledge.
Learned security principles and practices can then become ingrained in everyday behavior, supporting the growth of a cyber-conscious culture across the organization.
Firms may choose to add the fun factor through gamified experiences, or they may use impactful storytelling to illustrate the far-reaching consequences of clicking on a phishing link.
Simulated social engineering attacks can be particularly effective, as they closely mimic real-world scenarios.
When employees experience these situations firsthand within a controlled environment, they gain a deeper understanding of the risks they may face in their day-to-day work, prompting them to adapt their behavior accordingly.
Empowering employees with knowledge and skills also positively boosts their confidence in recognizing and mitigating cybersecurity threats.
The key is to incorporate modern, interactive and personalized training techniques that help users connect the dots between their actions and the overall security of the firm.
Organizations should constantly update training to reflect the evolving cybersecurity and regulatory landscape as well as employees’ changing use of technology.
Firms that solely rely on outdated web-based learning management systems to deliver their cybersecurity training will struggle to keep pace with the security impacts of chat-based applications like Slack and Teams.
Crucially, organizations must proactively gather continuous feedback from the user population to ensure that all aspects of the program align with employees’ specific needs, challenges and concerns.
Organizations should also regularly report cybersecurity training progress to C-level executives to encourage business-wide awareness and accountability.
Balancing Training And Operational Demands
Within the many competing distractions and demands of day-to-day business, it can be tricky for organizations to balance employee productivity with the requirements of ongoing security training.
However, firms must strike this balance.
Cyber risk is a business risk, and any successful cyberattack can have devastating business impacts.
Firms must incorporate cybersecurity training into their security budget, carefully assessing areas of risk, determining appropriate time allocation and assigning dedicated personnel to ensure its effectiveness.
They should also consider strategic partnerships with third-party providers that specialize in human-centric cybersecurity training to enhance the quality and effectiveness of their programs.
By putting the employee experience first, firms can tailor their security training to create customized, relevant and long-lasting teachable moments.
This human-centric approach empowers employees to proactively strengthen their organization’s cybersecurity and overall resilience.
Firms investing in a multilayered security training program that considers these unique needs are taking crucial steps to build a strong and sustainable security-conscious culture.
As the world progresses, the types of engineers required continue to evolve.
The earliest known engineers date back to ancient times, when military engineers were responsible for designing and building fortifications, siege engines and other military structures.
Fast forward thousands of years, and you start to see civil engineers being introduced.
We then move into the digital age, and software engineers cement their spot in the world of engineering.
Now, with AI taking the world by storm, the next engineering skill in high demand is prompt engineering.
What Prompt Engineering Is
Prompt engineering is the practice and process of crafting strategic questions to ultimately reach your desired output when leveraging a language model or other machine learning systems.
In the context of ChatGPT, a reference many of us are now familiar with, prompt engineering is the craft behind the questions you type into the chat bar.
The skill of prompt engineering may seem trivial, but it truly is a skill.
An effective prompt engineer understands the language model’s capabilities, limitations and tendencies, tailoring the input prompt to guide the model toward generating the desired content.
A good prompt engineer is someone who can retrieve their desired outcome with their first effort.
The skillset needed by a prompt engineer will differ depending on what you’re looking to receive from your AI application.
If you’re looking for the answer to “What is 2+2,” well, that is an easy prompt with a very straightforward answer and, therefore, doesn’t require much engineering.
However, imagine you need your AI model to spit out code that allows you to perform certain actions while avoiding specific errors.
To receive your desired output, you need to ask the question in a certain way so that the language model will not only be able to understand but will be able to return the exact outcome you are looking for.
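The difference is easiest to see side by side. The constraints below are invented for illustration; the point is that an engineered prompt encodes the output format, the errors to avoid and the acceptance criteria up front.

```python
# A vague request versus an engineered one. The specifics are illustrative;
# what matters is stating format, constraints and failure handling explicitly.
vague_prompt = "Write code to process a CSV."

engineered_prompt = (
    "Write a Python function summarize_sales(path: str) -> dict that reads a CSV "
    "with columns date, region, amount; skips malformed rows instead of raising; "
    "returns total amount per region; and raises ValueError only if no valid rows "
    "exist. Use only the standard library and respond with code only, no prose."
)
```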
Applications Of Prompt Engineers
There are many reasons you'd want a prompt engineer: someone focused on the most efficient outcome when it comes to leveraging AI models.
Some applications can include controlling creativity, fine-tuning responses, mitigating bias and enhancing accuracy.
Adjusting prompts allows users to control the level of creativity or specificity in the generated content.
As the prompt engineer, you can determine how detailed you want the reply to be.
The more detailed your instructions, the more you can steer the model toward a unique, creative output.
By tweaking the words or sentence structure of a prompt, you can influence the model to provide more contextually accurate responses.
For example, if you’re writing a paper on the history of the invention of the wheel, you probably will want to know the founding of the wheel across different civilizations.
This requires you to engineer your question to ensure the AI model knows you’re looking for multiple answers.
Prompt engineering can be used to reduce biases in model outputs by leading the questions to return a more objective reply based solely on facts.
You can encourage models to ignore certain voices or look specifically for objective references.
Crafting specific and targeted prompts will help you generate more accurate outputs.
The more specific and tailored your question, the more value you’ll get out of the effort.
Skepticism Of Prompt Engineers
It’s always important to understand why individuals may be bearish on prompt engineers in the rapidly developing world of AI.
According to one recent article, prompt engineering is simply not the future.
The article outlines how, at AI's current stage of advancement, prompt engineering is useful and important.
However, as technology advances, AI models, too, will advance and become more intuitive.
This means that, over time, the value of a good prompt engineer could be eroded by more sophisticated AI mechanisms.
Wrapping It Up
All in all, while the future doesn't have a definitive roadmap, what we can discern from the information available to us is that in today's world, prompt engineering is a valuable skill to harness.
While technology may make this role less impactful as time goes on, it currently retains value and can help position you as a stronger employee or job candidate.
Put simply, this is the science of asking really good questions.
So even if the role of prompt engineering diminishes, the skillset of knowing how to ask good and targeted questions is an attractive trait to embrace.
Digital Twin technology has gained an incredible amount of traction in the last few years.
Though quite challenging to execute in a meaningful way, the concept is relatively simple: a digital twin is quite literally a digital version, model, or representation that is meant to replicate the constitution, nuances and behavior of a real-world physical counterpart.
The idea seems quite futuristic; however, digital twins have been utilized across numerous industries for decades.
For example, one of the most famous use cases of this concept was by NASA, when the organization created and used multiple simulators to model the events that caused Apollo 13's oxygen tank explosion, so that the team could learn from it to prevent future mishaps.
Since then, digital twin technology has become prolific across industries, and its impact in healthcare is just getting started.
A recent report by the National Academy of Sciences discusses the value of digital twin technology and how it can radically transform healthcare by enabling access to more granular and deeper insights on patient care.
Dr. Karen Willcox, Director of the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin and chair of the committee that wrote the report, thoughtfully describes the immense promise that digital twin technology has in bridging key gaps in science, technology, medicine and more.
The report explains that to do so, however, government agencies, foundations and healthcare leaders have to prioritize and invest in the research and development for the technology.
The use cases for the technology can be tremendously impactful.
For example, a digital twin recreation of a tumor can be used to test new clinical decisions, treatment modalities and different therapeutic interventions.
Dr. Caroline Chung, M.D., an established radiation oncologist, Chief Data Officer at M.D. Anderson Cancer Center and a co-author of the report, enthusiastically explains that digital twin technology has a lot of potential.
For one, "digital twins of individual cells, organs or systems have the potential to inform decisions and to reduce the time and cost of discovering and developing new drugs and therapeutics."
Furthermore, taking a wider scope for the technology, she explains how "digital twins of organizations and operational processes can help inform decisions to improve the effectiveness and efficiency of cancer centers to allow more patients to access care and/or reduce costs."
However, Dr. Chung also explains that there is still a lot of work left to be done in this arena, and more attention and collaboration is required across domains.
For example, she explains that a standardized definition of a digital twin needs to be established, one that will "encompass both the virtual representation and physical counterpart" and delve into the nuances of the human-digital twin bidirectional interaction.
Notably, the private sector has recognized the immense potential value of this technology and has invested billions of dollars to develop it.
In fact, the global digital twin market is projected to grow to $110 billion by 2028 at a CAGR of nearly 61%, showing immense interest in this sector.
Especially with the democratization and influx of artificial intelligence, major technology companies are rapidly ramping up efforts to cultivate this technology.
This effort to scale the technology is especially gaining steam in the realm of healthcare delivery.
Take, for example, Twin Health, which recently secured $50 million in funding and has created the "Whole Body Digital Twin" service.
Twin Health's mission is straightforward: use digital twin technology to help patients mitigate and reverse chronic metabolic disease.
Specifically, the company partners with employers and health plans to offer members the ability to create a digital twin.
Using company-provided hardware (i.e., sensors), routine blood tests and data self-reported in a mobile application, a digital twin of the patient is established.
Leveraging artificial intelligence and proprietary foundation models in conjunction with this digital twin, the application then provides insights regarding the patient's nutrition, physical activity, sleep metrics and more.
These insights can be reviewed by the patient and the physician for a deeper and more granular approach to improving healthcare outcomes.
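A simplified sense of what such a twin aggregates can be captured in a few fields. The schema below is purely illustrative, not Twin Health's actual data model, and the screening thresholds are placeholders rather than clinical guidance.

```python
# Illustrative sketch of the kind of record a "whole body" digital twin might
# aggregate from sensors, lab work and self-reported data. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class DailyTwinRecord:
    patient_id: str
    glucose_mg_dl: list[float] = field(default_factory=list)   # continuous monitor samples
    sleep_hours: float | None = None                            # wearable-derived
    steps: int | None = None
    meals: list[str] = field(default_factory=list)              # self-reported in the app

    def flags(self) -> list[str]:
        """Very rough, illustrative screening; real thresholds belong to clinicians."""
        out: list[str] = []
        if self.glucose_mg_dl and max(self.glucose_mg_dl) > 180:
            out.append("post-meal glucose spike")
        if self.sleep_hours is not None and self.sleep_hours < 6:
            out.append("short sleep")
        return out
```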
Jahangir Mohammed, Founder and CEO of Twin Health, explains that while digital twin technology has been used across other industries for many years, his inspiration was to create a viable version for the most complex and intricate machine in the world: the human body.
He thoughtfully explains that the organization's goal is to provide patients with a way to truly improve their health, including for the most prevalent metabolic conditions such as obesity, diabetes and hypertension. Broadly speaking, there is much more to come on this front for healthcare.
In a journal article published last year, the authors discuss how the combination of artificial intelligence, the internet of things (IoT) and closed-loop optimization (CLO) "enables digital twins to offer predictive abilities beyond the traditional 'predictor' technologies that currently exist, e.g. ICU IoT-enabled comprehensive physiological monitoring."
The massive amount of investment currently taking place, especially in AI and IoT, indicates that the work with digital twins is just getting started.
Undoubtedly, the world will have to move cautiously with regards to this technology.
For one, strict data, privacy and safety standards will have to be implemented.
Furthermore, it will be critical to maintain high standards of validation and a "human in the loop" throughout any iterations of this technology for patient care; that is, while digital twins may provide insights, strict validation measures, data checks and human insight and judgement must be kept at the forefront of any decisions made.
However, if done correctly and scaled appropriately and safely, this technology has tremendous potential to transform healthcare.