<article>
In a nutshell: Figma is a company providing a web-based interface design platform focused on real-time collaboration.
Adobe pursued an acquisition of one of its most intriguing competitors in years, but the attempt ultimately failed due to antitrust scrutiny in Europe.
Adobe has abandoned its pursuit of Figma.
The company was willing to pay $20 billion to buy the product design startup, but regulatory watchdogs from the EU and UK opposed the deal.
Antitrust laws are designed to preserve market competition, but Adobe said the proposed remedies were unacceptable and “wholly disproportionate.”
Adobe and Figma negotiated the deal during the COVID-19 pandemic, a period that significantly increased worldwide technology and software investment.
The potential deal was eventually announced in September 2022, revealing that Adobe was willing to pay 50 times Figma’s annual recurring revenue and double the valuation from the company’s latest private funding round in 2021.
Since the announcement, the two companies have fought battles on multiple fronts, with antitrust authorities trying to halt the sale.
The European Commission believed the merger could “significantly reduce” competition in global markets.
The EU’s Competition Commissioner Margrethe Vestager said the Figma acquisition would have prevented “all future competition” between the two companies, leading to less choice, reduced quality, and higher costs for consumers.
The UK Competition and Markets Authority (CMA) was equally concerned with Adobe’s proposal.
The agency proposed alternative “remedies” in November that would have forced Adobe to either abandon the deal or divest overlapping products such as Illustrator or Photoshop.
Alternatively, the CMA could have forced Figma to sell off its core product, Figma Design, under the proposal.
Figma CEO Dylan Field told the Financial Times that the suggestion of “buying a company so that you can divest the company” was “quite amusing.”
Reading the CMA proposal was like reading a punchline to a joke, Field said.
Figma’s boss was disappointed with the outcome.
The situation ultimately forced Adobe to abandon the acquisition as there was “no clear path” to satisfy UK or EU regulators’ conditions for approval.
Provisional findings from the CMA contained “serious errors of law and fact,” Adobe and Figma said.
Regulators were influenced by an “irrational approach” to the gathering and appraisal of evidence.
Officials required divestment of a multibillion-dollar business (Photoshop, Illustrator) to address an “uncertain and speculative” theory of harm to competition, a wholly disproportionate reaction to the now-failed deal.
Priced from $349 to $600, these new video smartglasses form a whole new category of Assisted Reality, which traditionally means putting a monitor in or around your field of view, like the original Google Glass.
The new display glasses put a big screen right in front of you.
It’s the only thing you see, and the screen is large, covering about 45% of your field of view.
They’re all great for media consumption, playing games, or screen expansion for mobile productivity.
Instead of looking down at a screen in your hands, you can now sit up and look ahead.
If you’ve got a geek on your list, they’re really going to like these.
New TCL RayNeo Air 2 video smart glasses for a big screen experience on the go.
$349.
A smartphone viewing accessory that delivers a 201″ hi-def screen on the go.
We love these glasses, but compatibility is an issue.
If you have an iPhone 14 or any model with a Lightning port, you’re going to need the $99 to go with it.
The $499 XReal Air2.
$339.
The biggest, brightest screen this new category of mobile phone accessories delivers.
I recommend buying it with the $119, a needed iPhone adaptor, which allows you to place screens of varying sizes around the room.
Rokid Max AR Smartglasses $399.
One of the best is also the most expensive, but we love the $199, a great Android device for streaming and viewing downloaded videos without wi-fi.
The lightweight Station controller makes media consumption on the Rokid Max feel more like the experience of watching TV on your living room sofa.
$395.
I first met Virture at CES last year, fresh off their win as one of Time’s best inventions of 2022.
It was plug and play with my iPhone 15, and the old Samsung Galaxy 10 worked well, too.
The screen dimming feature, the focal adjustments, and the outstanding Harman audio are best-in-class.
The Spacewalker app turns the Virture into more of a 3DOF experience, and makes the phone your controller.
Check device compatibility, as some iPhones and iPads require an adaptor.
The Solos AirGo3’s detachable wing allows for interchangeable styling.
$299.
There’s no display or camera on these smartglasses, which use AI as their operating system.
Ray-Ban Stories.
$299.
Our friend, Moor Insights analyst Anshel Sag, took great care with this lengthy review of Meta’s new iteration of Ray-Ban Stories: “The latest collaboration between Meta and Luxottica marks a significant improvement over their predecessor, Ray-Ban Stories.
These glasses, powered by Qualcomm’s AR1 Gen 1 platform, offer enhanced features like superior camera capabilities (12MP photos, 1920 x 1440 video), increased storage (32GB), and advanced wireless connectivity.
Notable upgrades include IPX4 water resistance, a sleek design, and a battery life of 36 hours with the charging case.
The inclusion of Meta AI facilitates voice commands and messaging, enhancing day-to-day usability.
Despite some areas needing improvement, such as video frame rate and continuous use battery life, the Ray-Ban Meta smart glasses represent a major step forward in wearable technology, combining style, functionality, and a strong value at $299, signaling Meta’s leadership in the evolving smart glasses market.”
In brief: Pour one out for the GTX 16 Series.
According to new reports, Nvidia has decided to officially discontinue its entire line of GTX 16xx graphics cards that were first announced in 2019 and remain popular among Steam users.
It’s been over a year since Nvidia launched the last of its GTX 16xx graphics cards, the GTX 1630, a product we awarded a score of 20 and called an insult to gamers, which could explain its very short life.
According to the Chinese Board Channel Forums, Nvidia is expected to stop production of all GTX 16-series cards starting in the first quarter of next year – in a few weeks, essentially – with the final orders being completed this month.
Nvidia has already officially, or in some cases reportedly, discontinued most of the GTX 16xx cards, meaning the upcoming series discontinuation will only impact the GTX 1650 and GTX 1630; the latter is a cut-down version of the former.
The GTX 16 series is based on the same Turing architecture that launched a year earlier in the RTX 20 Series, which was more expensive and came with the likes of hardware-accelerated ray tracing and DLSS as standard.
While it might not have been as technologically advanced, the budget GTX 16 series has proved very popular with gamers: it was only in September that the RTX 3060 knocked the GTX 1650 off its long-held top spot on the Steam Hardware Survey GPU chart.
The older card remains in second place and even saw its user share increase by 1.13% last month.
Expect to continue to see the GTX 16-series cards available to buy until AIB and retailer inventories sell out – there are still plenty available from Amazon and Newegg.
Don’t be surprised to see their prices fall significantly over the coming months, too, and more GTX 16xx cards appearing on eBay as owners trade them in for newer models.
As we noted in our Best GPUs feature, there really aren’t many options left for anyone looking to buy a good sub-$200 GPU these days.
Our recommendation is the AMD Radeon RX 6600 or Radeon 6650 XT.
The former’s cheapest non-refurbished models on Newegg are $199, and they’re either discounted or from lesser-known brands.
The agreement was struck between the parties in September but was filed late Monday in a federal court in San Francisco, our colleague reported.
The settlement allocates a $630 million payout for U.S. consumers who used a payment system in Google’s Play store.
The states allege the system magnified prices for in-app purchases.
Google’s store collects a commission on in-app purchases.
“The Complaint also alleges state-law antitrust and consumer protection claims.
Specifically, the States allege that Google has unreasonably restrained trade and monopolized Android app distribution and payment-processing services through anticompetitive conduct,” the settlement agreement states.
Those who are eligible will receive payments.
Additional payments may be issued depending on how much those eligible spent on Google Play from Aug. 16, 2016, through Sept. 30, 2023.
Google will also need to pay penalties and other costs.
The settlement needs to be approved by a judge.
Google’s vice president of government affairs and public policy Wilson White in a blog post said the settlement “builds on Android’s choice and flexibility, maintains strong security protections, and retains Google’s ability to compete with other OS makers.”
Copyright 2023 Nexstar Media Inc.
All rights reserved.
This material may not be published, broadcast, rewritten, or redistributed.
An estimated 260 million packages disappeared in the U.S. last year, according to Safewise, many taken right from the front door area while a camera recorded the theft.
As the holiday season kicks into high gear, package thefts are a top concern, and one shipping company is using artificial intelligence to combat porch pirates.
Nearly one in four adults had a package stolen in the last 12 months, a survey by Finder said.
Theft can be an even more serious problem if those packages contain necessities, like medication, or expensive items.
“This time of year, we ship a lot of gifts, so every package is very special to the person receiving it,” said Tarek Saab, president of Texas Precious Metals, whose company ships items like silver bars and gold coins.
This year, Saab is using a new UPS data program called DeliveryDefense, which he says helps them identify addresses that are likely targets for theft.
UPS gave CBS News a look at how the program works.
The A.I.-powered program takes a recipient’s address and produces a score — a higher score indicates a higher likelihood of a successful delivery.
The scores are created using years of data from previous deliveries and other factors.
For addresses with a low score, the merchant can reroute the item, with the customer’s approval, to a UPS Store or other pickup locations.
“About 2% of addresses will be considered low confidence, and we’re seeing that represents about 30% of losses our customers are having,” Mark Robinson, president of UPS Capital, told CBS News.
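The score-then-reroute logic described above can be pictured with a small sketch. Everything in it is illustrative: the score scale, threshold, and function names are assumptions for explanation, not part of the actual DeliveryDefense product.

```python
# Hypothetical sketch of a delivery-confidence check like the one described
# above. The threshold, score range, and names are invented assumptions,
# not UPS's actual DeliveryDefense API.

LOW_CONFIDENCE_THRESHOLD = 300  # assumed cutoff on an assumed 100-1000 scale

def route_shipment(address: str, score: int) -> str:
    """Return a routing decision based on a delivery-confidence score.

    Higher scores indicate a higher likelihood of successful delivery;
    low-scoring addresses are offered a pickup-point reroute instead.
    """
    if score < LOW_CONFIDENCE_THRESHOLD:
        return f"offer pickup-point reroute for {address}"
    return f"deliver to {address}"

print(route_shipment("12 Main St", 850))  # high confidence: home delivery
print(route_shipment("99 Oak Ave", 120))  # low confidence: suggest reroute
```

The interesting design point is that the merchant, not the carrier, acts on the score, and only with the customer's approval, which matches how UPS describes the program.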
At Texas Precious Metals, Saab believes the technology can reduce those numbers.
“We recognize it’s computers versus criminals, and we have to use every tech capability that we have to try to circumvent any challenges we might run into,” he said.
Janet Shamlian is a CBS News correspondent based in Houston, Texas.
Shamlian’s reporting is featured on all CBS News broadcasts and platforms including “CBS Mornings,” the “CBS Evening News” and the CBS News Streaming Network, CBS News‘ premier 24/7 anchored streaming news service.
Once upon a time, there was a market for sentiment analysis across multiple vertical industries.
Companies wanted to know what clients thought about their products, services and even their “attitude.”
Did their clients like them, loathe them, empathize with their struggles?
Social media sentiment analysis was a way to expand insight into what clients do and think beyond traditional customer interviews and surveys.
Data was collected from newspaper clips, blogs, posts, headlines, articles and speeches – an expanded view of the customer.
But not a complete view.
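The kind of text mining this era relied on can be pictured with a minimal sketch. The tiny word lists below are invented for illustration; real sentiment systems use large lexicons or trained models rather than a handful of hand-picked words.

```python
# Minimal lexicon-based sentiment scoring, the style of analysis applied to
# the posts, headlines, and articles mentioned above. Word lists are
# invented for illustration only.

POSITIVE = {"like", "love", "great", "empathize"}
NEGATIVE = {"loathe", "hate", "poor", "struggle"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in a snippet of text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("customers love the great service"))  # 2
print(sentiment_score("clients loathe the poor support"))   # -2
```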
Fast forward a decade and a truly holistic approach to investing is the raison d’être of Highmoon.
Many of us who have championed the role that so-called “soft” variables play in decision-making have been waiting for a comprehensive approach to problem-solving – in this case investing – for decades.
Highmoon describes its approach this way: it blends all this with “traditional” market metrics to generate a complete picture of market conditions that informs investment decisions.
It’s amazing how many market analysts cite “emotion,” “psychology,” “sentiment” and other soft variables as the drivers of company and market performance, but fail to integrate them into their assessments or predictions.
Decades ago there were clear technological barriers to measuring the importance of these variables, but today we have generative AI (GenAI) and other tools that enable us to integrate hard and soft variables for improved investment decision-making, risk reduction, and even market projections.
But the most important statement from Highmoon founder and CEO Lumine Lin?
The marriage between soft market forces and GenAI is a smart one.
While it remains to be seen how emotionally smart GenAI can be – how fast, if at all, it reaches general AI (GAI) – there’s no question that Highmoon’s approach is the right one.
It’s also one that will reveal some unexpected investment results as GenAI’s capabilities grow.
Said a little differently, the timing of all this is excellent, as the capabilities of GenAI will only grow, and grow quickly.
The key to all this is the research agenda that emerges from a holistic approach to investing.
The premise of Highmoon’s approach is that there’s a large – and growing – family of variables that drive financial performance.
When that’s the assumption, research yields never before seen nuances.
Well-bounded problems with pre-packaged solutions are redefined into broader, fuzzier problems that require a set of new methods, tools and techniques that have not been in the traditional arsenal of financial analysts.
Highmoon is pioneering the application of GenAI to enable its holistic approach to market analysis and the deployment of capital.
At least two aspects of this are unique: the broad definition of market drivers and the application of GenAI.
The integration of cognitive biases, emotional influences, stock market sentiment, traders’ emotional intelligence and psychological factors with traditional financial metrics is what many of us have been hoping would transform the investment thesis.
The journey from social media analytics to GenAI-enabled holistic investing has been a long one — but it’s begun at Highmoon.
Stay tuned.
Marma charging rack for micromobility.
Standab, the Swedish micromobility infrastructure start-up, plans to roll out parking bays for bikes and e-scooters that can charge the batteries in the vehicles.
Marma is a parking rack for the likes of e-scooters that is equipped with charging technology to charge the electric batteries while the scooter or bike is docked.
The Swedish start-up has been making parking infrastructure since 2018 but the latest product marks a move into charging, chief executive Marcus Adolfsson said.
“We saw the natural development of this was to start charging.
When they’re parked why are they not charging?” he said.
Marma was initially piloted in Stockholm, where Standab is based, with Dott, one of Europe’s largest micromobility companies.
“We are always looking for ways to improve our integration into cities,” Laurent Kennel, chief development officer at Dott, added.
“This pilot with Standab has shown that a charging solution can be combined with neat, organized parking in city centers, to ensure riders can always find a vehicle which is ready to use, whenever they need it.”
With the findings from the pilot, Standab now plans to expand the new product to between four and seven European cities in 2024 and is in talks with many of the major operators around the continent.
Collaborating with operators is a key issue, Adolfsson said.
“We saw that the main issue was the compatibility, it has to be compatible with everyone.
Cities are not interested in having, for example, Tier come with their charging solution when there are three or four more players there.
It has to be compatible [with] everyone.”
Marma can solve issues for both the operators and city authorities, he added.
Parking of e-scooters has been a major bugbear in cities, with reckless parking or discarding of vehicles, which can cause injuries, becoming a hot-button issue.
This was a major factor behind Paris’s ban on rental e-scooter services earlier this year.
“The cities have a real issue that we’re solving and it’s of course the parking issue,” Adolfsson said.
“The good thing here is that we are all interested in the same areas, it’s the high footfall areas.
The operators, that’s where they have parking problems as well, and it’s also where they have the highest rotation of scooters.
Of course they want their fleet to be topped up [with energy] as much as possible.”
For the e-scooter operators, they can keep more units on the streets with enough juice while appeasing the concerns of city officials.
This, Adolfsson added, can be more cost effective than the practice of swapping out drained batteries in the fleet.
“Swapping a battery is extremely expensive for the operators, it’s a high manual cost and we’re going to be able to deliver that same amount of electricity into the battery at a much lower cost than manual swapping,” he said.
According to Standab, the Marma stands can charge a battery about 20% to 25% in an hour.
“If you take a scooter that has been there for an hour, you get an extra 25% of battery which will give it more or less an extra day of operations on the streets.”
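That claim is easy to sanity-check with simple arithmetic; the charge rate comes from Standab's figures above, and the cap at a full battery is the only added assumption.

```python
# Back-of-the-envelope check of the charging claim above. The 25% per hour
# rate is the upper bound Standab quotes; the full-charge cap is an obvious
# assumption, not a Standab specification.

CHARGE_RATE_PER_HOUR = 0.25  # fraction of battery gained per docked hour

def charge_gained(hours_docked: float) -> float:
    """Fraction of battery gained while docked, capped at a full charge."""
    return min(1.0, hours_docked * CHARGE_RATE_PER_HOUR)

assert charge_gained(1) == 0.25  # one docked hour: roughly a day of riding
assert charge_gained(5) == 1.0   # five hours is enough to top up fully
```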
Standab generates revenue from the operators who pay for the electricity used plus a premium.
“Cities get the infrastructure for free.
The operators get the infrastructure for free as well as the connectors and everything.
We are more charging as a service, the operators pay for exactly what amount of electricity is going into the scooter with a premium there.”
Adolfsson said that Standab has deployed 1,700 of its previous parking racks in dozens of European cities and hopes to replicate that with Marma.
He is joined in running Standab by Eric Bergqvist, who previously held senior public policy roles at Voi, one of Europe’s largest e-scooter players.
Several other former Voi executives have also joined the company.
Adolfsson said that the start-up has been largely self-financed to date but will explore opportunities from outside investors to fund the expansion of Marma.
By Taiwo Adebayo – Associated Press – Wednesday, December 20, 2023 OMO FOREST RESERVE, Nigeria (AP) —
Men in dusty workwear trudge through a thicket, making their way up a hill where sprawling plantations lie tucked in a Nigerian rainforest whose trees have been hacked away to make room for cocoa bound for places like Europe and the U.S.
Kehinde Kumayon and his assistant clear low bushes that compete for sunlight with their cocoa trees, which have replaced the lush and dense natural foliage.
The farmers swing their machetes, careful to avoid the ripening yellow pods containing beans that will help create chocolate, the treat shoppers are snapping up for Christmas.
Over the course of two visits and several days, The Associated Press repeatedly documented farmers harvesting cocoa beans where that work is banned in conservation areas of Omo Forest Reserve, a protected tropical rainforest 135 kilometers (84 miles) northeast of the coastal city of Lagos in southwestern Nigeria.
Trees here rustle as dwindling herds of critically endangered African forest elephants rumble through.
Threatened pangolins, known as armored anteaters, scramble along branches.
White-throated monkeys, once thought to be extinct, leap from one tree to the next.
Omo also is believed to have the highest concentration of butterflies in Africa and is one of the continent’s largest and oldest UNESCO Biosphere Reserves.
Cocoa from the conservation zone is purchased by some of the world’s largest cocoa traders, according to company and trade documents and AP interviews with more than 20 farmers, five licensed buying agents and two brokers all operating within the reserve.
They say those traders include Singapore-based food supplier Olam Group and Nigeria’s Starlink Global and Ideal Limited, the latter of which acknowledged using cocoa supplies from the forest.
A smaller number of those working in the forest also mentioned Tulip Cocoa Processing Ltd., a subsidiary of Dutch cocoa trader and producer Theobroma.
Those companies supply Nigerian cocoa to some of the world’s largest chocolate manufacturers including Mars Inc. and Ferrero, but because the chocolate supply chain is so complex and opaque, it’s not clear if cocoa from deforested parts of Omo Forest Reserve makes it into the sweets that they make, such as Snickers, M&Ms, Butterfinger and Nutella.
Mars and Ferrero list farming sources on their websites that are close to or overlap with the forest but do not provide specific locations.
Government officials, rangers and the growers themselves say cocoa plantations are spreading illegally into protected areas of the reserve.
Farmers say they move there because their cocoa trees in other parts of the West African country are aging and not producing as much.
“We know this is a forest reserve, but if you are hungry, you go to where there is food, and this is very fertile land,” Kumayon told the AP, acknowledging that he’s growing cocoa at an illegal plantation at the Eseke farming settlement, separated only by a muddy footpath from critical habitat for what UNESCO estimates is the remaining 100 elephants deep in the conservation zone.
Conservationists also point to the world’s increasing demand for chocolate.
The global cocoa and chocolate market is expected to grow from a value of $48 billion in 2022 to nearly $68 billion by 2029, according to analysts at Fortune Business Insights.
The chocolate supply chain has long been fraught with human rights abuses, exploitative labor and environmental damage, leading to lawsuits, U.S. trade complaints and court rulings.
In response, the chocolate industry has made wide-ranging pledges and campaigns to ensure they are sourcing cocoa that is traceable, sustainable and free of abuse.
Companies say they have adopted supply chain tracing from primary sources using GPS mapping and satellite technology as well as partnered with outside organizations and third-party auditors that certify farms’ compliance with sustainability standards.
But those working in the forest say checks that some companies rely on are not done, while one certifying agency, Rainforest Alliance, points to a lack of regulations and incomplete data and mapping in Nigeria.
AP followed a load of cocoa that farmers had harvested in the conservation zone to the warehouses of buying agents in the reserve and then delivered to an Olam facility outside the entrance of the forest.
Staffers at Olam’s and Tulip’s facilities just outside the reserve, who spoke on condition of anonymity because they’re not authorized to discuss their companies’ supplies, confirmed that they source cocoa from farmers in the conservation zone.
AP also photographed cocoa bags labeled with the names and logos of Olam and Tulip in farmers’ warehouses inside the conservation zone.
The Omo reserve consists of a highly protected conservation zone ringed by a larger, partially protected outer region.
Loggers, who are also a major source of deforestation, can get government licenses to chop down trees in the outer areas, but no licenses are given anywhere for cocoa farming.
Agriculture is banned from the conservation area, except for defined areas where up to 10 indigenous communities can farm for their own food.
Nigeria is one of Africa’s biggest oil suppliers and its largest economy; after petroleum, one of its top exports is cocoa.
It’s the world’s fourth-largest producer of cocoa, accounting for more than 5% of global supply, according to the International Cocoa Organization.
Yet it’s far behind the world’s largest producers, Ivory Coast and Ghana, which together supply more than half of the world’s demand and are often singled out in companies’ sustainability programs.
According to World Bank trade data and Nigeria’s export council, more than 60% of Nigeria’s cocoa heads to Europe and about 8% to the United States and Canada.
It passes through many hands to get there: Farmers grow the cocoa beans, then brokers scout farms to buy them.
Licensed buying agents purchase the cocoa from brokers and sell it to big commodity trading companies like Olam and Tulip, which export it to chocolate makers.
In October, AP followed a blue- and white-striped van loaded with bags of cocoa beans along a road pitted with deep mud holes within the conservation zone to an Olam warehouse just outside the entrance of the forest.
At the warehouse, which Olam confirmed was theirs, AP photographed the cocoa being unloaded from the van, whose registration number matched the one filmed in the forest.
Farmer Rasaq Kolawole and licensed buying agent Muraina Nasir followed the van to sell the cocoa, and neither expressed misgivings about the deforestation.
“We are illegal occupants of the forest,” said farmer Kolawole, a college graduate and former salesperson.
AP also visited four cocoa warehouses in the forest belonging to licensed buying agents: Kadet Agro Allied Investments Ltd., Bolnif Agro-allied Farms Nigeria Ltd., Almatem and Askmana.
Managers or owners all told AP that they buy from farmers growing cocoa in protected areas of the forest and that they sell that cocoa to Olam.
Three of the warehouse managers told AP that they also sell to Tulip and Starlink.
“They do not differentiate between cocoa from local – that is farms outside the forest – and the reserve,” said Waheed Azeez, proprietor of Bolnif, describing how “big buyers like Olam, Tulip and Starlink” buy cocoa sourced from deforested lands.
“They buy everything, and most of the cocoa is from the reserve.”
Despite AP’s findings, Olam insists that it “forbids” members of its “Ore Agbe Ijebu” farmer group from “sourcing from protected areas and important natural ecosystems like forests.”
That Ijebu farmer group is listed as a sustainable supplier on Olam’s website and is said to be in Ijebu Ife, a community near the reserve.
“Any farmers found not complying with the code and illegally encroaching on forest boundaries are removed from our supply chain and expelled from the OAIJ farmer group,” the company said in a statement emailed to AP.
However, Askmana manager Sunday Awoke said, “Olam does not know the farmers.
We buy from the farmers and sell directly to Olam, and no assessment against deforestation takes place.”
Speaking to AP as a convoy of motorcycles brought bags of cocoa from the conservation area to his warehouse within the reserve, Awoke said he used to be a conservation worker who fought deforestation by farmers.
“But I am on the other side now.
I wish to go back, but survival first, and this pays more,” he said.
Others agreed.
“The place is not meant for cocoa farming, but elephants,” said Ewulola Bolarinwa, who is both a broker and a leader of those who farm at the Eseke settlement inside the conservation zone.
“We have a lot of big buyers who supply the companies in the West, including Olam, Tulip and many more.”
Ferrero, which makes Ferrero Rocher hazelnut balls, Nutella chocolate hazelnut spread and popular Baby Ruth, Butterfinger and Crunch candy bars, lists a farming group in a community near the forest as the source of its cocoa supplied by Olam, the Italian company says on its website.
McLean, Virginia-based Mars Inc., one of the world’s largest end users of cocoa with brands from Snickers to M&Ms, Dove, Twix and Milky Way, uses Nigerian cocoa from both Olam and Tulip, according to online company documents.
Ferrero, Mars and Tulip say they’re committed to their anti-deforestation policies, use GPS mapping of farms, and their suppliers are certified through independent standards.
Ferrero also says it relies on satellite monitoring to show that its “cocoa sourcing from Nigeria does not come from protected forest areas.”
Mars says its preliminary findings show that none of the farms it’s mapped overlap with the reserve.
Tulip’s managing director, Johan van der Merwe, said in an email that the company’s cocoa bags, which AP photographed in farmers’ warehouses inside the conservation zone, are reused and distributed widely so it’s possible they’re seen across Nigeria.
He also said “field operatives” complete digital questionnaires about sourcing with all farmers and suppliers.
On the ground, however, farmers and licensed buying agents who said they supply Tulip told AP that they were not required to complete any questionnaire before their cocoa is purchased.
“Though we know they depend on our cocoa, we don’t directly sell cocoa to the exporters like Olam and Tulip, middlemen do, and there are no questions about deforestation,” said farmer Saheed Arisekola, 43, also a college graduate who said he turned to farming because he could not get a job.
As farmers, brokers and buying agents say cocoa from the conservation area flows into Olam’s export supply, U.S. customs records show a slice of where it might be going.
Olam’s American arm, Olam Americas Inc., received 18,790 bags of Nigerian cocoa shipped by its Nigerian subsidiary, Outspan Nigeria Limited, between March and April 2022, according to trade data from ImportGenius.
Olam and Tulip are both licensed to trade Nigerian cocoa certified by the Rainforest Alliance.
However, Olam told AP that its license does not cover the Ijebu area, where it sources the cocoa it sends to Ferrero and is near Omo Forest Reserve.
Ferrero says Olam’s sustainability standard in the area is verified by a third-party body.
Farmers who told AP that their cocoa heads to Olam and Tulip said they are not Rainforest Alliance certified.
Tulip has only one farm with active certification in Nigeria, the nonprofit’s database shows.
The Rainforest Alliance says it certifies that farms operate with methods that prohibit deforestation and other anti-sustainability practices.
It says farmers must provide GPS coordinates and geographic boundaries for their plantations, which are checked against public forest maps and satellite data.
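At its core, the boundary check described above amounts to a point-in-polygon test of a farm's GPS coordinates against a forest map. The sketch below uses an invented rectangular boundary as a stand-in; real certification checks run against official maps and satellite data, not toy geometry.

```python
# Simplified point-in-polygon check of the kind used to compare a farm's
# GPS coordinates against a protected-forest boundary. The boundary below
# is an invented rectangle, not Omo Forest Reserve's real geometry.

def point_in_polygon(lon: float, lat: float, polygon: list) -> bool:
    """Ray-casting test: is the point (lon, lat) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast rightward from the point
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Invented rectangular "reserve" boundary as (longitude, latitude) pairs
reserve = [(4.0, 6.8), (4.6, 6.8), (4.6, 7.2), (4.0, 7.2)]

print(point_in_polygon(4.3, 7.0, reserve))  # farm plot inside the boundary
print(point_in_polygon(5.0, 7.0, reserve))  # farm plot outside the boundary
```

As the Rainforest Alliance notes, the weak link is the map itself: a geometric test is only as good as the forest-boundary data it is run against.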
The Rainforest Alliance told AP that Nigeria has “unique forest regulation challenges,” including incomplete or outdated data and maps that can “lead to discrepancies when comparing forest data with real on-ground conditions.”
It said it is working to get updated data from Nigerian authorities and would decertify any farms found to be operating illegally in conservation areas following a review.
The organization also says companies it licenses can buy cocoa certified by other agencies or that isn’t certified at all.
Starlink Global and Ideal Limited – the Nigerian cocoa exporter that the farmers and buying agents said they sell to – doesn’t have its own farmland in the reserve, “only suppliers from there,” spokesman Sambo Abubakar told AP.
Starlink does not make sustainable sourcing claims on its website, but it supplies at least one company that does – New York-based General Cocoa Co., U.S. trade data shows.
Between March and April 2023, Starlink shipped 70 containers, each loaded with 4,000 bags of dried cocoa beans, to General Cocoa, according to ImportGenius trade data.
General Cocoa, which is owned by Paris-headquartered Sucden Group, supplies Mars, according to online company documents.
Jean-Baptiste Lescop, secretary general of Sucden Group, says the company manages risks to forest conservation by sourcing Rainforest Alliance cocoa, mapping farms and using satellite images but that it’s a “continuous process” because most farmers in Nigeria don’t have official land ownership documents.
Sucden investigates reports of problems and is working on a response to AP’s findings about Starlink, Lescop said.
The conservation zone, which spans about 650 square kilometers (250 square miles), is the only remaining vital rainforest in Nigeria’s southwest, conservation officials say.
Such forests help absorb carbon from the atmosphere and are crucial for Nigeria to meet its pledges under the Paris climate agreement.
Besides helping fight climate change, the forest is designated an Important Bird and Biodiversity Area by BirdLife International, with significant populations of at least 75 bird species.
“There are now more than 100 illegal settlements of cocoa farmers, who came from other states because the land here is very fertile,” said Emmanuel Olabode, a conservation manager who supervises the reserve’s rangers in the protected areas.
“But after some years, the land becomes unproductive.”
The farmers know this.
“We’ll then find another land somewhere else or go back to our original homes to start new businesses,” said Kaseem Olaniyi, who acknowledges that he farms illegally in the conservation zone after moving in 2014 from a neighboring state.
The government in Ogun state, which owns the forest, said in a statement to AP that the “menace of cocoa farming” in the reserve dates back decades and that “all the illegal farmers were forcefully evicted” in 2007 before they found their way back.
“Arrangements are in the pipeline to engage the services of the Nigerian Police Force and the military to evict them from the Forest Reserve,” the government statement said.
However, Omolola Odutola, spokeswoman for the federally controlled police, said they do not have records of such a plan.
The farmers have been ordered not to start new farms, and those who spoke with AP said they are complying.
But forest guards said new farms are sprouting up in remote areas that are difficult to detect.
Rangers – who work for the government’s conservation partner, the nonprofit Nigerian Conservation Foundation – and forest guards who are employed by the state government both told AP that lax government enforcement has made combating cocoa expansion a challenge.
They told AP that previous arrests have done little to stop the farmers from returning and that has led to a sense of futility when they encounter illegal farming.
The state government said it “has never compromised regulations” but acknowledged that farmers are in the forest despite its efforts.
Homes and other buildings at farming settlements visited by AP have been marked for removal, including warehouses like that of licensed buying agent Kadet, one of the biggest there.
Farmers’ homes lack running water and toilets, forcing women and children to collect water from narrow streams to use while the men work.
The removals have not taken place because officials make money from the cocoa business in the forest, according to farmers and buying agents. They lament the difficult living conditions, with potholed mud roads driving up transportation costs that eat away at their already meager profits.
The state government declined to comment about making money from illegal cocoa farming in the forest.
The agents have formed a lobby group that has “rapport with government officials” to ensure farmers remain in the conservation zone despite threats to evict them, said Azeez, the owner of buying agent Bolnif who is also chairman of a committee that monitors risks against cocoa business in the forest.
The European Union, the largest destination of cocoa from West Africa, has enacted a new regulation on deforestation-free products that requires companies selling commodities like cocoa to prove they have not caused deforestation.
Big companies must ensure they’re following the rules by the end of 2024.
Experts at the Cocoa Research Institute of Nigeria are launching a “Trace Project” in six southern states – though it doesn’t include Ogun state where Omo Forest Reserve is located – to advance efforts against deforestation in cocoa production and ensure Nigeria’s cocoa is not rejected in Europe.
“From the preliminary data collected, major exporters are implicated in deforestation, and it is their responsibility to ensure compliance with standards,” said Rasheed Adedeji, who leads the institute’s research outreach.
But farmers say they’ll keep finding places to work.
“The world needs cocoa, and the government also gets taxes because the cocoa is exported,” said Olaniyi, one of the farmers.
___
Associated Press climate and environmental coverage receives support from several private foundations.
See more about AP’s climate initiative here.
The AP is solely responsible for all content.
Copyright © 2023
The Washington Times, LLC.
With CES 2024 just around the corner, a slew of product reveals is in store with the first set being LG’s upcoming soundbar lineup.
The 2024 soundbars, as presented in LG's press release, hinge primarily on TV synergy and audio immersion, using several features that make them stand out from the crowd. The two newest models in the lineup are the SG10T and S70TY, while the S95TR is an upgraded version of 2023's S95QR.
Among the features coming to next year’s lineup is a new AI Room Calibration feature that “rapidly analyzes the environment of a room and adjusts the settings” both for the front and rear surround speakers.
Using newly designed internals and a rich feature set that is expertly paired with specific LG TVs, these new soundbars could prove to make a world of difference in your everyday entertainment consumption — provided you have an LG TV to pair alongside them.
As already mentioned, LG’s new 2024 soundbar lineup consists of three main products: the S95TR, SG10T, and S70TY.
They’re listed in accordance with their general market, with the S95TR being LG’s flagship model.
Given its premium nature, the S95TR comes kitted with all the bells and whistles, including 15 channels, 810W of total power output, and a list of internal upgrades to set it apart from the previous S95QR iteration.
The size of the soundbar has been increased slightly to better compensate for these internal improvements.
The S95TR will use an enhanced passive radiator and a set of upgraded tweeters for impeccable audio quality and boosted bass.
Its five dedicated upward-firing drivers will also make it quite a beast for those who love Dolby Atmos, but do note that this is a wired product, as opposed to the other new LG offerings.
On that topic, the new LG SG10T is a wireless soundbar with some neat tricks up its sleeve.
The most obvious one is its ultra-thin design that's intended to make wall mounting far easier and fit more cleanly with LG's G series OLEDs, like the LG G3 OLED, which the soundbar specifically caters to.
In the more mid-range category is the S70TY, yet another wireless soundbar built specifically for LG’s QNED TV models.
It’s set to leverage LG’s up-firing channel speakers and is intended to aesthetically match with the LG QNED80, QNED85, QNED90, QNED95, and LG QNED99 TVs.
There’s little else LG announced in terms of exciting internals on the new soundbar models and pricing remains a question mark most likely until CES 2024.
The company did, however, highlight some key features that will make these models turn your TV viewing experience into a jaw-dropping spectacle.
So which of the features are specific to LG TVs?
Leveraging the WoWCast feature, soundbars — like the aforementioned SG10T and S70TY — can wirelessly send Dolby Atmos sound to other compatible devices, such as LG TVs and even alternative LG soundbars.
LG’s Wow Orchestra, a feature that allows both the TV and soundbar to work in tandem for perfected sound quality, is also being improved and included on newer LG soundbar models.
Finally, the newly introduced Triple Level Spatial Sound technology is also an exciting new addition on LG soundbars, gifting them, as the press release states, “lifelike sound and a compelling sense of space.”
The feature uses a 3D engine to analyze channels for the most optimal and immersive sounds possible.
The good news is that this one should work regardless of whether you own an LG TV or not.
Those in attendance at CES 2024 in January can get a special taste of LG’s new soundbars and the delectable audio enhancements they provide to the home cinema.
LG will be in the Central Hall of the Las Vegas Convention Center at booth #15501.
Researchers have developed an augmented reality heads-up display that could improve road safety by displaying potential hazards as high-resolution three-dimensional holograms directly in a driver’s field of vision in real time.
Current heads-up display systems are limited to two-dimensional projections onto the windshield of a vehicle, but researchers from the Universities of Cambridge and Oxford and University College London (UCL) developed a system that uses 3D laser scanning and LiDAR data to create a fully 3D representation of London streets.
The system they developed can effectively “see” through objects to project holographic representations of road obstacles that are hidden from the driver’s field of view, aligned with the real object in both size and distance.
For example, a road sign blocked from view by a large truck would appear as a 3D hologram so that the driver knows exactly where the sign is and what information it displays.
The 3D holographic projection technology keeps the driver’s focus on the road instead of the windshield, and could improve road safety by projecting road obstacles and potential hazards in real time from any angle.
The results are reported in the journal Advanced Optical Materials.
Every day, around 3,200 people are killed in traffic accidents, most of them linked to human error.
Technology could be used to reduce this number and improve road safety, in part by providing information to drivers about potential hazards.
Currently, this is mostly done using heads-up displays, which can provide information such as current speed or driving directions.
“The idea behind a heads-up display is that it keeps the driver’s eyes up, because even a fraction of a second not looking at the road is enough time for a crash to happen,” said Jana Skirnewskaja from Cambridge’s Department of Engineering, the study’s first author.
“However, because these are two-dimensional images, projected onto a small area of the [windshield], the driver can be looking at the image, and not actually looking at the road ahead of them.”
For several years, Skirnewskaja and her colleagues have been working to develop alternatives to heads-up displays (HUDs) that could improve road safety by providing more accurate information to drivers while keeping their eyes on the road.
“We want to project information anywhere in the driver’s field of view, but in a way that isn’t overwhelming or distracting,” said Skirnewskaja.
“We don’t want to provide any information that isn’t directly related to the driving task at hand.”
The team developed an augmented reality holographic point cloud video projection system to display objects aligned with real-life objects in size and distance within the driver’s field of view.
The system combines data from a 3D holographic setup with LiDAR (light detection and ranging) data.
LiDAR uses a pulsed light source to illuminate an object and the reflected light pulses are then measured to calculate how far the object is from the light source.
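The distance calculation LiDAR performs is simple time-of-flight arithmetic: the pulse travels to the object and back, so the range is half the round-trip time multiplied by the speed of light. A minimal worked example:

```python
C = 299_792_458  # speed of light in m/s

def lidar_range_m(round_trip_s):
    """Distance to a target, given the round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# A pulse that returns after roughly 200 nanoseconds
# corresponds to a target about 30 meters away.
print(round(lidar_range_m(200e-9), 2))  # 29.98
```

The nanosecond timescales involved are why LiDAR units need very precise timing electronics to resolve distances at the scale of a street scene.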
The researchers tested the system by scanning Malet Street on the UCL campus in central London.
Information from the LiDAR point cloud was transformed into layered 3D holograms, consisting of as many as 400,000 data points.
The concept of projecting a 360° obstacle assessment for drivers stemmed from meticulous data processing, ensuring clear visibility of each object’s depth.
The researchers sped up the scanning process so that the holograms were generated and projected in real time.
Importantly, the scans can provide dynamic information, since busy streets change from one moment to the next.
“The data we collected can be shared and stored in the cloud, so that any drivers passing by would have access to it—it’s like a more sophisticated version of the navigation apps we use every day to provide real-time traffic information,” said Skirnewskaja.
“This way, the system is dynamic and can adapt to changing conditions, as hazards or obstacles move on or off the street.”
While more data collection from diverse locations enhances accuracy, the researchers say the unique contribution of their study lies in enabling a 360° view by judiciously choosing data points from single scans of specific objects, such as trucks or buildings, enabling a comprehensive assessment of road hazards.
“We can scan up to 400,000 data points for a single object, but obviously that is quite data-heavy and makes it more challenging to scan, extract and project data about that object in real time,” said Skirnewskaja.
“With as little as 100 data points, we can know what the object is and how big it is.
We need to get just enough information so that the driver knows what’s around them.”
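The reduction the researchers describe, from hundreds of thousands of scanned points per object down to a representative handful, can be sketched as a simple random downsample. This is illustrative only; it assumes the scan is stored as an N×3 array of xyz points, and the paper's actual point-selection method is more deliberate than random sampling.

```python
import numpy as np

def downsample(points, k, seed=0):
    """Randomly keep k of the N points in an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]

# A synthetic "object" scan of 400,000 xyz points
cloud = np.random.rand(400_000, 3)
sparse = downsample(cloud, 100)
print(sparse.shape)  # (100, 3)
```

The trade-off is exactly the one Skirnewskaja describes: more points mean a more faithful hologram but a heavier real-time processing load.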
Earlier this year, Skirnewskaja and her colleagues conducted a virtual demonstration with virtual reality headsets loaded with the LiDAR data of the system at the Science Museum in London.
User feedback from the sessions helped the researchers improve the system to make the design more inclusive and user-friendly.
For example, they have fine-tuned the system to reduce eye strain, and have accounted for visual impairments.
“We want a system that is accessible and inclusive, so that end users are comfortable with it,” said Skirnewskaja.
“If the system is a distraction, then it doesn’t work.
We want something that is useful to drivers, and improves safety for all road users, including pedestrians and cyclists.”
The researchers are currently collaborating with Google to develop the technology so that it can be tested in real cars.
They are hoping to carry out road tests, either on public or private roads, in 2024.
More information: Accelerated Augmented Reality Holographic 4K Video Projections Based on LiDAR Point Clouds for Automotive Head-Up Displays, Advanced Optical Materials (2023).
DOI: 10.1002/adom.202301772
The 8 Biggest AI Moments Of 2023

I feel confident that 2023 will be looked back on as a major turning point in the adoption of AI into society.
Largely thanks to the emergence of generative natural language interfaces, including ChatGPT, AI has become an increasingly omnipresent aspect of our everyday lives.
From what we know of earlier technological and industrial revolutions, the arrival of AI is as much about societal change as technological change.
The impact on how we work, build, socialize and play will be enormous, and the direction this will take us in is likely to be strongly influenced by actions taken today.
So, here’s what I believe are the most important events of the last 12 months – in terms of the implications they could have on the future of not just AI but our lives.
2023 was undoubtedly defined by ChatGPT mania.
Although we were blown away by the capabilities of the GPT-3.5 engine powering its initial release, rumors were swirling regarding the upcoming GPT-4 and how much more powerful and capable it would be.
When it arrived in March, few were disappointed – it was clearly far more advanced in its ability to converse and provide us with information.
Further updates throughout the year gave it the ability to search the web as well as view and create images.
Public awareness of the dangers of fake AI images and videos rose in 2023, thanks to the emergence of ever-more realistic deepfakes and scams.
One incident in particular threw a spotlight on the issue when images of Pope Francis wearing a ridiculously oversized puffer jacket appeared at the top of our social media feeds.
The image was noteworthy for its realism, and the response showed that it had clearly fooled many people.
For many, this could have been the first real indication of the technology’s potential for spreading manipulation and misinformation.
As examples of legislation being enacted in 2023, the European Union AI Act and China’s Interim Measures for the Management of Generative Artificial Intelligence (GAI) are potentially the most significant.
In June, the European Parliament adopted the act, which is designed to ensure the safe, reliable, and transparent use of AI in line with existing legislation, including the Human Rights Act and GDPR.
The GAI interim measures aim to promote the healthy use of AI that is in line with moral and ethical values.
Both contain measures aimed at addressing the issue of AI and intellectual property.
Some of the most profound implications for AI are in the field of bioscience and genetics, and in 2023, Google made a significant leap forward with its DeepMind-developed AlphaMissense model.
Google said that its model was able to identify potentially dangerous mutations that can cause diseases like cystic fibrosis, sickle-cell anemia, hemophilia and cancer.
The breakthrough built on work previously done by the AlphaFold project and a catalog of mutation data was published with the aim of helping researchers fight these diseases.
We’ve never seen a business go from virtual obscurity to darlings of the tech world as quickly as has happened with ChatGPT owner OpenAI, so it’s probably natural that its organizational integrity experienced some stress.
CEO and current Silicon Valley golden boy Sam Altman was briefly fired before being reinstated, for reasons that still aren't exactly clear but seem to come down to board-level politics and rivalry.
Why is it significant?
Well, we've never seen a superstar CEO like Altman removed and then, due to popular demand, returned to power so quickly.
The event has led to renewed calls for greater regulation and scrutiny into the ways that leading AI companies are run, which could influence debate on the subject in the future.
Public awareness is a critical part of the puzzle regarding managing AI‘s integration into society.
The Pause Giant AI Experiments open letter may not yet have achieved its apparent primary aim.
But thanks to the publicity generated by signatories including Elon Musk, Yoshua Bengio and Stuart Russell, it captured headlines and generated awareness.
By the end of the year, more than 33,700 signatures had been gathered, and although research on giant AI is ongoing, I believe many of us have a better understanding of what’s at stake as we head into 2024.
A new version of Microsoft’s Bing, powered by ChatGPT, appeared early in 2023, marking the first time many people would see natural language integrated into a search engine.
The move was not surprising as Microsoft’s multi-billion-dollar investments in ChatGPT’s creator, OpenAI, were well publicized.
However, it marked the start of a rush to integrate large language models capable of understanding and responding via human language into many tools and apps.
This trend is likely to be at the vanguard of AI’s push to be integrated and accepted into society in the next few years.
Later in 2023, Microsoft began rolling out generative AI functionality branded as Copilot across other products and services tied to its Windows operating system and 365 Office platform.
A first-of-its-kind international agreement was reached at the UK's 2023 AI Safety Summit between 28 countries, including the US, EU, UK and China.
The agreement aims to put in place a framework for identifying risks and putting guardrails around the super-powerful AI we can expect to see emerging in the near future.
Some have said the event and its outcomes were shrouded by political aims and overlooked many concerns, including the impact on women and questions around intellectual property rights and AI.
But it can’t be denied that the issue of safe and ethical advanced AI is critical and will require international cooperation at the highest levels.
It’s less than a week until Christmas Day, so you can’t put it off any longer.
If you’re gifting this season, you need to get your act together pronto.
Gadgets and tech make for great gifts: if the person you’re choosing for likes an electric toothbrush, they’ll think of you every morning.
Below, I’ve gathered together 15 of the best items for every budget and a variety of recipients.
Some will be brands you know; others, I hope, will be new to you.
All have been put through their paces and reviewed thoroughly so they come with a tried-and-tested recommendation.
Philips Sonicare DiamondClean Prestige 9900 ($249.96 / £223.52)
The Sonicare range is unbeatably good, offering excellent brushing to ensure optimum dental health, with multiple settings and a companion app that can tell you how well you're doing as you go, warning you if you press too hard or miss a bit.
The brush head is bigger than on some rivals and has angled bristles to improve plaque removal.
It's a sonic toothbrush, which means it vibrates so fast that plaque is dislodged more easily.
There’s a charging base where the toothbrush stands and a travel case which charges by USB.
This is great for the holidays, though the battery tends to last more than a week anyway.
This is an expensive toothbrush, but look around and you can find it significantly cheaper, as in the Amazon prices above which are respectively 34% and 59% off.
Amazon Kindle Paperwhite ($139.99 / £149.99)
Give the gift of reading with Amazon's best-ever Kindle.
The Paperwhite now comes with a bigger display, 6.8 inches, narrow borders around the edges and a flush-front design.
Kindles are front-lit, so unlike a regular tablet which is backlit, these are much more restful on the eyes.
The Paperwhite, which is waterproof so you can read in the bath with peace of mind, has a light which you can adjust in brightness and color, from white to amber.
This also contributes to the restfulness of the experience.
The price includes lockscreen ads which are not very intrusive, but if you don’t like them, you can pay an extra $20 to banish them.
Note that the U.K. price is higher because it’s for the 16GB capacity version, while the U.S. price is for 8GB.
Unless you plan on holding multiple Audible audiobooks on your device, 8GB is enough.
Nomad 65W Slim Power Adapter ($65)
Travel adapters are tricky things: you always need several and they often have the wrong connectors on board.
The new slim adapter has two USB-C sockets, so you can plug in two devices at once, and it's powerful enough to handle a laptop and a phone at the same time.
It’s designed with technology which makes it small and slim, so it can be squirreled away in the smallest carry-on.
If you don’t need multiple sockets, there’s a 35W version which has one output socket on board, which costs $35.
Nomad’s trademark immaculate build quality is on display here in both cases.
OOFOS OOMG Sport LS Shoe ($139.95 / £129.95)
This health option concerns recovery.
If you’ve been running and have overdone it, or you suddenly suffer from the annoying and painful plantar fasciitis, then you need recovery footwear.
The incredibly soft proprietary foam OOFOS uses is different from the bouncy effect you find in performance shoes; it's designed to help you recover.
It works brilliantly and is amazingly comfortable to wear.
There are different styles from clogs to slippers to mules to trainers.
The website has a big range and if the person you’re gifting is in need, their feet will thank you.
OnePlus Open ($1,699.99 / £1,599)
Phones that open out into smallish tablets are all the rage these days, and the OnePlus Open looks great, especially in its sultry green finish, and works like a dream.
It feels sturdy, which is not something you can say about all foldables, and it is thinner than some rivals.
The OLED panels look great, inside and out, with a matte screen protector helpfully minimizing reflections.
Some foldables neglect the camera, but the rear module here, with three snappers, is enormous and offers great performance.
The main 48-megapixel camera is especially good and the company’s liaison with Hasselblad continues to pay dividends.
The other cameras are a 48-megapixel ultrawide and a 64-megapixel periscope zoom lens.
Highly enjoyable.
Sonos Era 300 Smart Speaker
Sonos pioneered great-sounding wireless speakers that were superbly easy to set up.
The latest model, the Era 300, is the most versatile and best yet.
Its unique shape gives you a clue that it’s designed to project audio in every direction, and it delivers sumptuous, room-filling sound.
It’s designed to work with spatial audio formats.
And, unlike most of Sonos’s previous devices, this one includes Bluetooth as well as wi-fi.
One of the joys of Sonos is the way you can add more speakers when budget allows.
Easy set-up remains as key as ever and this is sublimely simple to get going.
And using it reminds you how good your music can sound.
There are plenty of other Sonos models worth a look, but this is strong on value and outstanding for audio quality.
Apple Watch Ultra 2 ($799 / £799)
You need to get your skates on for this one if you're living in the U.S.
Thanks to an intellectual property disagreement that has gone against Apple, the company is pausing online sales on Thursday, December 21, and in-store sales after Sunday, December 24.
Other countries are unaffected, and you should be able to buy it from other resellers in the U.S. after these cut-off dates.
The Watch Ultra 2 is brilliant—literally, as it has a 3,000 nits display, the brightest yet on an Apple Watch.
It looks great and works wonderfully well.
There’s an extra button compared to other Apple smartwatches, which can be configured as you like.
It’s perfect to start workouts, for instance.
The Ultra 2 also has the new double-tap mechanic, where you pinch your index finger and thumb together to pause a timer, snooze an alarm or otherwise interact with the watch.
Battery life is two days with ease.
Bose QuietComfort Ultra Headphones ($429 / £449.95)
These cans are not cheap (though if you're quick you can snag $50 or £50 discounts) but they are tremendous.
Bose is known for its class-leading noise-cancellation and that’s better than ever here, promising a quieter environment when it’s turned on.
As well as quiet mode there’s an effective aware mode, and a Bose specialty: immersion mode which works in conjunction with Bose immersive audio, which works better with some tracks than others.
Across the board, though, audio quality is tremendous.
Superbly comfortable, even for extended wear.
Belkin AirPods Cleaning Kit ($14.99 / £14.99)
This is a great stocking-filler or other small gift for anyone with AirPods.
It may not be the pleasantest of subjects, but in-ear headphones can see build-up of earwax on them (I mean, the good news is at least it’s not in your ears any more, right?).
It’s designed for regular AirPods rather than Pro or Max, but works with all three generations of AirPods.
There’s a brush, a dispenser with wax softener in it, cleaning gel and microfiber cloth.
You drip the softener into the areas you want to clean, wait for a minute, tip it out and gently scrub with the brush.
After that, the cleaning gel satisfyingly picks up any debris.
It's sold as a single-use package, but I reckon you could clean a couple of pairs at least.
Therabody TheraFace Mask ($599 / £549)
LED light therapy is used to stimulate the skin's collagen and elastin production to make your skin look younger, which is probably something everybody wants.
TheraFace is the mask from Therabody, the brilliant company that makes outstanding massagers, among other things.
It has almost 650 LEDs which offer three different settings: red, blue or red plus infra-red.
These variously treat fine lines and wrinkles (red), treat breakouts and blemishes (blue) and tone the skin (red plus infra-red).
There are plenty of different sessions.
The mask also has vibration therapy, which is relaxing and enjoyable.
Tremendously good.
Allbirds Wool Runner-up Mizzles in Thunder Green ($145 / £135)
Winter wear has never been more comfortable.
The exceptional Allbirds shoes are smart because they’re made from remarkable materials.
They use a technologically advanced design, and have unique foam soles (made from sugar, though don’t try snacking on them).
The shoelaces are made from recycled plastic bottles and there’s recycled nylon in some products and TrinoXO in others, which contains chitosan, made from crab shells.
The insole is made from merino wool and castor bean oil.
Wear them and you’ll feel like you’re walking on clouds.
The Runner-ups cover more of your ankles, keeping them warm in the winter.
And the Mizzles refer to their resistance to mist and drizzle, keeping you dry, too.
Ugreen MagSafe 10,000mAh Powerbank ($69.99 / £49.99)
There are plenty of battery packs on the market.
What sets this one apart is it is compact and light but still manages to pack in 10,000mAh of energy.
That’s enough for two charges of an iPhone 15, or one-and-a-half times an iPhone 15 Pro Max.
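The charge-count claims are easy to sanity-check with some rough arithmetic. The sketch below assumes typical battery capacities of about 3,349mAh for the iPhone 15 and 4,422mAh for the iPhone 15 Pro Max (these are teardown estimates, not official specs) and a conventional ~70% conversion efficiency to cover charging losses.

```python
def full_charges(bank_mah, phone_mah, efficiency=0.7):
    """Rough number of full phone charges a power bank can deliver,
    allowing for typical conversion and wireless-charging losses."""
    return bank_mah * efficiency / phone_mah

# Assumed capacities (not official Apple figures):
# iPhone 15 ~3,349 mAh, iPhone 15 Pro Max ~4,422 mAh
print(round(full_charges(10_000, 3_349), 1))  # 2.1
print(round(full_charges(10_000, 4_422), 1))  # 1.6
```

Under those assumptions the numbers line up with the claim: roughly two full charges for an iPhone 15, and about one and a half for a Pro Max.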
It’ll work with other phones, too, but is best for an iPhone with MagSafe as it snaps in place.
There’s also a kickstand, so you can use this to connect an iPhone 15 Pro, for instance, in landscape orientation so that it can display a clock at night, among other features.
What’s more, it can charge up to three devices at the same time, though only one wirelessly, obviously.
Anker 765 USB-C to USB-C Cable (3ft: $34.99 / £22.99; 6ft: $34.99 / £25.99)
I know, a cable is about as sexy as a handkerchief or pair of socks, but here's why you should consider this: this year, Apple's iPhone moved to a USB-C connector instead of the previous Lightning socket.
So, plenty of people who until now had more cables than they knew what to do with suddenly need a spare cable for the car, the bedroom or the travel pack in the suitcase.
Anker’s cables are brilliant, offering great build quality and attractive design.
They’re powerful enough to allow up to 140W output so you can be sure your gadget is charging fast.
Nomad Base One Max ($150)
When Apple introduced its ring of MagSafe magnets on the back of the iPhone to ensure a secure and strong connection between charger and phone, it also created charging pads, but they were so light that when you lifted the iPhone, the pad would stay attached.
However, Nomad’s handsome, well-built charger is heavy enough to stay put, however quickly you grab your phone.
A rubber base means it doesn’t slip and you can choose between a dark carbide or eye-catching silver.
If you have an Apple Watch as well, then the Base One Max has a charging pad for your smart timepiece, too, plus a dip into which AirPods Pro fit.
There’s also a version, the Base One, that’s for iPhones only.
Nomad does not supply a charging plug, believing that many of us have more power adapters than we know what to do with.
Note that this needs a 30W adapter as a minimum.
Philips Azur Elite Iron (£119.99)
Now, look, you need to be careful who you give an iron to: it could suggest you expect them to smooth the wrinkles out of all your clothes.
But if that special recipient has hinted that’s what they want, there’s no better choice than the Philips Azur Elite.
Philips makes sensational steam irons and the Azur Elite is the top of the range.
It includes something called OptimalTEMP technology, which basically means you never have to set the temperature of the iron; it does so automatically, with no fear of burning or scorching the fabric, whatever it is.
Philips also claims the steam control is intelligent, ensuring just the right amount of steam is released.
It heats up fast and has a steam boost to get rid of creases.
Hard to beat.
Note, in the U.S., Philips sells a different range of irons, but they are consistently well-designed and effective.
Happy holidays!
If you’ve recently updated your Windows 11 PC to the very latest version you might suddenly find you start having issues with your Wi-Fi.
The gremlins seem to have begun after users installed the most recent cumulative updates, KB5032288 and KB5033375, which were pushed out during Microsoft’s most recent Patch Tuesday.
Once the updates were downloaded onto PCs, some users reported Wi-Fi connectivity issues, especially when trying to join public networks.
Microsoft has now confirmed that it is looking into the bug, which doesn’t affect any version of Windows 10.
“Microsoft has received reports of an issue in which some Wi-Fi adapters might not connect to some networks after installing KB5032288,” the Redmond firm said in an update.
“We have confirmed this issue was caused by KB5032288 and KB5033375.
As reported, you are more likely to be affected by this issue if you are attempting to connect to an enterprise, education, or public Wi-Fi network using 802.1x authentication.
This issue is not likely to occur on home networks.”
If your PC is having issues, Microsoft recommends rolling back to the previous version of Windows 11 until it releases an official fix.
The company also wants affected users to submit data using the Feedback Hub.
To uninstall the update, search “Windows Update” from the Windows search bar.
Then navigate to “Update History” and tap “Uninstall Updates”.
You should then see KB5033375, which can then be easily removed.
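If you are comfortable with the command line, the built-in Windows Update Standalone Installer offers a quicker route. This is just a sketch: the KB number below is the one named in Microsoft’s advisory, and both commands need to be run from an elevated (Administrator) prompt on the affected PC.

```shell
# List installed updates first, to confirm the problem KB is actually present
wmic qfe list brief /format:table

# Remove the problematic cumulative update (Windows will prompt for confirmation)
wusa /uninstall /kb:5033375
```

Windows will ask you to restart once the removal completes; you can also pause updates in Settings to stop the same patch from reinstalling before Microsoft ships a proper fix.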
Hopefully, Microsoft will release a full update soon which will iron out the bug for good.
If you’re still sticking with Windows 10 then don’t forget there’s limited time left before this software loses full support from Microsoft.
That date is set for October 2025 and means users will no longer get new features or security patches.
Microsoft has recently confirmed that it will offer updates past the 2025 deadline although Windows 10 users will need to pay extra to access them.
Phones can be a thoughtful gift around the holidays, whether you’re wrapping up a present for a relative who’s a few generations behind the latest smartphone or one of your kids champing at the bit for the latest and greatest technology.
For many, a phone will be an exciting and welcome present.
At the same time, it’s also a highly personal item that comes with a lot of personal preferences.
If you’re trying to track down a phone as a present, you’ve got to contend with more than just finding the best cell phone deals.
There are other considerations, both when you’re shopping for that phone and as you’re getting ready to give that gift.
Here are the seven things to consider when giving a phone as a holiday gift.
The first thing to ask yourself is how much you want to spend on a gifted phone and how you want to pay for that purchase — and how both those considerations balance with the needs of your gift recipient.
You probably know that there are plenty of handsets to choose from at a wide range of prices.
But you’re also going to need to consider whether to pay for the phone up front or through monthly installments.
The latter is almost always an option when buying a phone from wireless carriers, and a few retailers offer payment plans, too.
But be sure to be clear on who’s going to be paying off that phone.
“What the gift giver should absolutely not do is saddle their giftee with monthly phone payments that they did not ask for and may not be prepared to handle,” said Lauren Hannula, a tech expert with wireless comparison site WhistleOut.
“If you can’t buy the phone outright or cover any installment payments yourself, find a different yet equally thoughtful gift.”
(Note: a practical alternative to buying a brand-new phone, especially for your kid or teen, is gifting your current device as a hand-me-down and upgrading your own handset.)
One factor that may help you with budgeting for your smartphone gift is knowing which phone carrier your recipient uses.
If you’re on a family cell phone plan — purchasing that gift phone for a partner or child, for example — this is an easy decision, and you may be able to get a better deal if you trade in an old device or buy multiple new phones at once.
Other considerations include pricing at the phone carrier — both for the phone and the cell phone plan — as well as whether the recipient has other wireless devices, such as an Apple Watch, on their current plan.
If you aren’t sure, or if you want maximum flexibility, you can buy the device unlocked.
Be sure to check the return and exchange policies before buying a phone as a gift, just in case the recipient changes their mind.
Your recipient most likely already has a phone and is comfortable with a specific operating system — Android or iOS are the two most common options these days.
You really shouldn’t choose a device that’s different than what they have, unless they’ve explicitly stated a desire to switch platforms.
“Stick with what they already have, but just upgrade them,” Hannula said.
When considering your budget for a gift phone, you should also take into account an insurance or device protection plan, which may be offered by the carrier, manufacturer or both.
These plans allow for free or discounted phone repair or replacement in the event of a lost or damaged device.
(Be aware that there are restrictions, so you should read any terms and conditions ahead of time.)
If your gift recipient has a tendency to drop their phone or leave it behind, insurance may be well worth the extra expense for a device that can cost as much as $1,000 for some of the leading flagship phones.
Note that some protection plans are billed monthly, while others can be paid in full up front depending on how you buy the device.
If you’re gifting a phone, make sure you’re not transferring this expense to the recipient without their knowledge.
Some carriers and manufacturers also allow you to enroll in a protection plan after purchase within a certain timeframe if that’s a decision you want to leave up to your gift recipient.
Knowing how the recipient uses their phone will help you determine what features your gifted device should come with.
For example, if they love mobile gaming, look for a large, bright display with a high refresh rate and extended battery life like you’d find on the best gaming phones.
Photographers and videographers will appreciate one of the best camera phones along with plenty of storage for storing all those images and video files.
Users who are very active—runners, for example—and minimalists who carry their phones in their pockets may appreciate a smaller, low-profile device they can take on the go.
You should also consider accessibility, such as display size, speech-to-text, screen reading and accessory compatibility, for users who need these features.
You could set up a gifted phone in advance so it’s ready to use the moment it’s opened, though many recipients will prefer to do this themselves.
This allows them to transfer their contacts, apps and other information from their old phone, which you likely cannot do without their knowledge.
“Plus, it’s nice to unwrap a brand-new phone that hasn’t yet been meddled with,” Hannula added.
A good compromise is to offer to help the recipient with setup as needed once they’ve opened their gift.
Phone accessories can make great low-stakes gifts — with or without a new device.
If you know the recipient’s style and habits, you could include a case — we’ve got our picks for a variety of current models, including the best Google Pixel 8 cases and the best iPhone 15 cases, just to name a few.
Screen protectors, wireless chargers, car mounts and desk stands are also useful add-ons for new phones.
Just be sure to get a gift receipt in case the recipient wants to exchange your selection.
A brand-new phone can be a welcome and exciting holiday gift for someone you love as long as you take their needs and preferences into account, since they’ll be the ones using the device for years to come.
Pharmacy group Rite Aid was ordered Tuesday to stop using facial recognition for the next five years by a US regulator, which said the company falsely identified consumers as shoplifters using the technology.
The case touches on one of the main concerns about the proliferation of artificial intelligence, and facial recognition in particular, which is deemed to potentially misidentify or discriminate against individuals, especially non-whites and women.
“Rite Aid’s reckless use of facial surveillance systems left its customers facing humiliation and other harms,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection.
The FTC said that from 2012 to 2020, Rite Aid deployed facial recognition technology to spot repeated shoplifting offenders and other problematic behavior.
But the technology “falsely flagged … consumers as matching someone who had previously been identified as a shoplifter or other troublemaker.”
The pharmacy group, which is currently in bankruptcy proceedings, also failed to properly train employees about the fact that there could be false positives with the technology or to prevent the use of low-quality images.
In addition to the ban, the FTC ordered the group and any other company involved to delete all data connected to its program.
Rite Aid said it was “pleased to reach an agreement with the FTC and put this matter behind us.”
© 2023 AFP
Amazon has surprisingly slashed the prices of several of its Alexa products right before Christmas, meaning now is a great time to invest in the company’s smart speakers.
The newest addition to the Echo line-up, the Echo Pop, only came out in May, but Amazon is currently selling it for £19.99, a huge saving on its usual £44.99 RRP.
The Echo Pop is a compact smart speaker that connects to your home Wi-Fi and reacts to voice commands, just as all smart speakers do.
It can play the radio, set alarms, tell you the weather, read the headlines, and perform many other useful tasks for you throughout the day.
It connects to Spotify, Apple Music or Amazon Music (naturally) as well as BBC Sounds, or you can use it as a Bluetooth speaker and control the music with your phone rather than via voice commands.
If you want, you can also link the Pop up to your Amazon shopping account and ask it to re-order something you might be running low on.
And if you pick one up in time for Christmas Day, you can ask Alexa to run a few trivia rounds to keep your guests entertained between dinner courses.
As part of Amazon’s sale, for just £5 extra you can also buy an Echo Pop with a Meross smart plug or with a Philips Hue bulb.
The former is a plug that fits into your wall socket and acts as a way to set up and use smart home devices, such as the aforementioned Hue bulb – in this case, you need the plug in order to ask Alexa via the Echo Pop to turn on the lights, or perform other actions with other smart devices in your home.
The Echo Pop looks like a pretty useful gadget even at full price, so grabbing it for under £20 during this sale seems a very good deal to us.
It’s not just the Echo Pop that’s on sale – Amazon has also cut the prices of the Echo Dot and Echo Dot with clock, Ring smart doorbells, Kindle e-readers, and the latest Fire TV Stick 4K.
You can check out all of Amazon’s tech on sale here.
We’re not sure how long the deals are on for, but it’s certainly unusual to see such good prices before Christmas, rather than straight after.
What just happened?
A massive, six-month police operation that saw 34 nations cooperating, with financial aid from South Korea, has resulted in the arrest of almost 3,500 alleged cybercriminals for a wide range of crimes.
Authorities have also seized $300 million in cash and digital assets.
International police organization Interpol revealed the transcontinental operation against online financial crime this week.
Dubbed Operation HAECHI IV, the op targeted seven of the most common online scams: voice phishing, romance, sextortion, investment fraud, money laundering associated with illegal online gambling, business email compromise fraud, and e-commerce fraud.
Three-quarters of the crimes investigated involved business email compromise, e-commerce fraud, and investment fraud.
As part of the operation, authorities blocked 82,112 suspicious bank accounts, seizing a combined $199 million in hard currency and $101 million in virtual assets.
Officers also identified 367 virtual asset accounts linked to transnational organized crime.
One of the scams used by the criminals was the popular NFT “rug-pull” scheme in which investors are promised huge returns if they invest in the digital tokens, only for the operators to disappear with the money, leaving the buyers with worthless digital images they paid thousands for.
In March last year, two men, both aged 20 at the time, were arrested after making $1.1 million from an NFT “rug-pull” scam.
Interpol also warned about the new threat of AI and deep fake technology being used to trick people into believing they are talking to friends or family members, either over the phone or on video calls.
The voice-based version of the scam has been used to steal thousands of dollars from victims in the US and Canada this year.
In June, the FBI warned that criminals are harvesting images from social media sites, using them to create sexually explicit deepfakes of the victims, then blackmailing the targets by threatening to send the pictures and videos to friends and family.
Interpol highlighted how the operation involved cooperation between Filipino and Korean authorities that led to the arrest in Manila of a high-profile online gambling criminal after a two-year manhunt by Korea’s National Police Agency.
“The seizure of USD 300 million represents a staggering sum and clearly illustrates the incentive behind today’s explosive growth of transnational organized crime.
This represents the savings and hard-earned cash of victims.
This vast accumulation of unlawful wealth is a serious threat to global security and weakens the economic stability of nations worldwide,” Interpol’s Executive Director of Police Services, Stephen Kavanagh, said.
Microsoft now lets you create music from a simple text prompt in its Copilot AI.
The partnership with Suno includes creating lyrics and a backing track in any style.
The musical addition to Copilot is available through plug-ins and is being gradually rolled out to all users over the coming weeks.
So if you don’t see it today, try again another day.
Unlike other music creation tools, like the Google MusicFX experiment I tried last week, Suno can generate a complete song with music and lyrics from a simple text prompt.
Copilot users can get access to Suno by going to the Microsoft Copilot website, logging in with a personal Microsoft account and enabling the Suno plug-in.
The plug-in menu should appear in the top right of the screen; just look for the logo that says “Make music with Suno”.
Once enabled, it’s just a case of letting your imagination run wild.
Enter a text prompt in much the same way you would for any other interaction with Copilot and then wait for it to make the magic happen.
It will send your request out to the Suno app and come back with a song title, lyrics and a play button where you can listen to the creation.
There is still a degree of artificiality about the vocals.
You can hear the digital effects underlying the voice.
But synthetic voice technology is improving all the time and that likely won’t be an issue in future versions.
The music isn’t as good as that generated by MusicFX: it feels more like a generated backing track than a crafted piece of art. But Suno does provide a complete song, and you can remix it, adapt the lyrics and even change the style completely.
If you don’t have access to Suno through Copilot yet and don’t want to wait you can access it by going to the Suno website.
You get 50 free credits when you sign up, but each song generation or remix costs 10 credits — so you may want to wait for Copilot, which will be free.
Microsoft said in a blog post: “We believe that this partnership will open new horizons for creativity and fun, making music creation accessible to everyone.”
This is a sentiment likely shared by other big tech companies and startups.
Google is investing heavily in generative music and it goes beyond the MusicFX experiment.
The company has launched DreamTrack in YouTube which lets you create backing music for Shorts in the style of real-life artists like John Legend and Charlie Puth.
Meta is working on a range of AI-music experiments and research projects.
If these follow the same path as Meta’s image and video projects they could soon be integrated into Facebook, WhatsApp and Instagram.
The issue is how to balance the ability of AI to create a song from nothing, with the right of artists and musicians to have their talent and creativity protected.
Suno, the tool that powers music creation in Copilot, won’t let you create a song in the style of an existing artist.
You can’t just say “make a Christmas song that sounds like Mariah Carey or Michael Buble”.
You can say make it in the style of a diva or crooner.
I think over time, like we’re seeing with generative image and video technology, the music industry will adapt and even adopt generative AI music tools as a way to enhance or speed up the creation process.
A quick “try it out” before recording for real.
It’s less than a week until Christmas and now is a good time to make sure your Wi-Fi is ready for the festive rush.
With millions of us staying at home, the broadband is bound to get a battering as we all try and stream movies, download games to play on consoles, test out new gadgets and get the Christmas playlist booming out of every room.
The last thing anyone wants is their Wi-Fi playing up and there is a simple thing to do this week that should help to keep things up to speed.
Virgin Media has recently issued some useful advice that can make sure things stay up to scratch.
The best part is, it only takes a few minutes to perform and you don’t need any technology know-how.
All you need to do is reach around the back of your router and turn this flashing black box off via the power button.
Leave it for around a minute then turn it back on and wait for the reboot to take place (this usually takes around 4-5 minutes so be patient).
This easy advice not only clears any issues that might be clogging up the Wi-Fi but also makes sure your gadgets are connected to the best and strongest channel.
“Wi-Fi modems remain static on a single channel setting, which can become congested,” Virgin explained.
“If another gadget nearby is also using one of the same channels (for example, your neighbour’s router), the two devices could be competing for airtime.
You can solve this by turning your Hub off and on again.
When you reboot, the Hub’s channel switching mode instantly kicks in.
It will automatically pick up the best channel to operate on.”
Along with turning things off, there are plenty of other things you can try to make sure Christmas isn’t ruined by dropouts and dire speeds.
Firstly, make sure you check the position of the router and move it away from things that could cause interference such as wireless speakers, cordless phones and baby monitors.
Christmas trees are also bad for signals as they are covered in glossy baubles and metal tinsel so don’t place your router under the branches.
Other things to try include raising the box off the floor and keeping it out of cupboards.
Fish tanks, microwave ovens and mirrors can also wreak havoc with the broadband so keep these objects clear for better speeds.
All of these tips work no matter who your Internet Service Provider (ISP) is so give them a try.
Microsoft’s burgeoning relationship with OpenAI is piling scrutiny on the tech giant’s market power and the ways it is building and wielding that power in the lucrative artificial intelligence (AI) space.
Microsoft successfully skirted some of the spotlight on the market power of other technology giants, such as Apple and Google, over the past few years.
As AI becomes a target for global regulators and lawmakers, Microsoft’s growing partnership with the leading AI firm and creator of ChatGPT is bringing renewed attention.
The Federal Trade Commission (FTC) is reportedly examining the nature of Microsoft’s investment in OpenAI.
The U.K. Competition and Markets Authority (CMA) also launched an inquiry to see if Microsoft’s growing relationship with OpenAI “resulted in a relevant merger situation,” and if that change would lessen competition.
Microsoft invested billions of dollars in OpenAI and has incorporated ChatGPT into Microsoft services.
The lines between the two companies, however, have been blurred in the aftermath of the tumultuous firing and rehiring of OpenAI co-founder and CEO Sam Altman.
“The relationship between Microsoft and OpenAI existed prior to that.
But that debate, that very public dispute, showed really the extent of Microsoft’s control over OpenAI,” said Lee Hepner, legal counsel at the American Economic Liberties Project, a nonprofit that supports aggressive antitrust enforcement.
The OpenAI board announced that Altman had been ousted as CEO on Friday, Nov. 17. By the following Monday, Microsoft had announced it would hire Altman to lead an AI research team. But Altman returned to the AI startup just a few days later — with the blessing of Microsoft CEO Satya Nadella — after hundreds of OpenAI employees threatened to quit if he was not reinstated.
One week later, OpenAI announced that Microsoft would have a non-voting observer seat on its board of directors.
“The question on everyone’s minds here is ‘Who absolutely controls OpenAI?’”
Hepner said.
“On the one hand, you have this perception of a nonprofit board that is controlling OpenAI and making critical decisions about the future of development of AI, generally.
On the other hand the public tussle over the disposition of Sam Altman and his role at OpenAI, I think, made very clear that actually Microsoft controls OpenAI,” Hepner added.
Microsoft has pushed back on accusations that it controls OpenAI.
After U.K. regulators asked for comment on the Microsoft-OpenAI partnership — the first step toward the launch of a formal investigation — Microsoft President Brad Smith said the partnership has “fostered more AI innovation and competition, while preserving independence for both companies.”
Smith said the “only thing that has changed is that Microsoft will now have a non-voting observer on OpenAI’s board.”
He also said the situation is “very different from an acquisition such as Google’s purchase of DeepMind in the UK,” seemingly taking a shot at rival Google over its 2014 acquisition of an AI company.
Sam Weinstein, a former Justice Department antitrust attorney who is now a professor at the Cardozo School of Law, said the change in having a non-voting board observer “potentially raises more red flags” for antitrust enforcers.
“Instead of what’s been characterized by Microsoft as this kind of economic interest where they have no actual control … now you have a board observer.
Of course, they don’t have a vote, but they’re in the room.
So do they exert control there?
We won’t know, we’re outside the room, but it’s a question,” he said.
The situation also draws more questions about how information is flowing between the companies, he said.
“The board observer kind of raises the stakes a little bit and catches the eye of the enforcers,” Weinstein added, but acknowledged the situation falls in a legal “gray area.”
The Clayton Act prohibits a person from serving as a director or officer of competing corporations, but a non-voting observer from Microsoft may not violate that law, he said.
“One way you could look at this is Microsoft’s exerting some kind of control over OpenAI, it’s kind of hard to say how much, in a way that at least seems like it doesn’t implicate the antitrust laws,” Weinstein said.
“They’re sort of dancing around the scope of what the enforcement agencies can do by having this unusual arrangement,” he added.
Microsoft has also sought to tamp down concerns about its influence on OpenAI.
“While details of our agreement remain confidential, it is important to note that Microsoft does not own any portion of OpenAI and is simply entitled to share of profit distributions,” Microsoft chief communications officer Frank Shaw said in a statement following the CMA’s invitation for comment.
The FTC declined to comment on whether it was probing the Microsoft-OpenAI relationship, but a source familiar with the situation told The Hill that the agency has not reached out to Microsoft about the matter.
FTC Chair Lina Khan, a vocal critic of tech giants’ market power before assuming her role at the agency, has focused the commission on the potential competition concerns raised by AI.
In a June memo, the agency pledged to use its “full range of tools to identify and address unfair methods of competition.”
The FTC voted unanimously last month to authorize the use of compulsory process, such as subpoenas, in nonpublic investigations involving AI-related products and services.
The FTC is currently controlled by three Democratic commissioners after the departures of two Republican commissioners earlier this year and last.
The agency said the update “will streamline” the agency staff’s ability to issue civil investigative demands.
“There’s an obvious effort at the Federal Trade Commission to move quickly here.
And I think that that’s what is most important.
The outcome will be a result of the investigation, but it’s critical that the Federal Trade Commission move quickly before this runaway train gets too far down the track,” Hepner said.
The American Economic Liberties Project and other tech advocacy groups including Demand Progress and the Center for Digital Democracy sent a letter Wednesday to Khan and Jonathan Kanter, assistant attorney general and head of Justice Department antitrust division, urging their offices to investigate “Big Tech’s concentration in the AI space.”
The letter specifically calls out changes between Microsoft and OpenAI’s relationship, including the restructuring that secured Microsoft a non-voting board position.
The groups said the change “indicates a greater level of access and control for Microsoft over the management and future trajectory of OpenAI,” according to a copy of the letter.
The rise of generative AI comes on the heels of the federal government’s ramped-up scrutiny of the market power of tech giants — an effort that spawned a sweeping House antitrust report and lawsuits filed by antitrust enforcers under both the Trump and Biden administrations.
While Microsoft avoided much of the recent push to rein in Big Tech, the company is no stranger to antitrust scrutiny.
The federal government sued Microsoft in 1998, alleging that the company was violating antitrust law through restrictions placed on its software.
Microsoft and the Justice Department eventually reached a settlement after the company successfully appealed a district court ruling that would have broken up the firm.
More recently, the FTC challenged Microsoft’s $69 billion acquisition of the gaming company Activision Blizzard, but the deal closed after the FTC failed in its court bid to block it.
The FTC is still fighting the merger, through an appeal, even after the deal closed.
A decision on the appeal has not yet been reached.
The 2020 House antitrust report — and hearings with the CEOs of Apple, Amazon, Google and Facebook that followed — focused largely on how the tech giants amassed their power and if they stifled innovation by acquiring nascent companies as the modern-day internet and social media landscape took shape.
Microsoft may have avoided the spotlight of that push, but Weinstein said the industry is at another inflection point as key AI companies rise up and pose a potential threat — or valuable input — to tech giants.
Microsoft and OpenAI are now in the middle of that debate.
Regulators have to face the question of whether companies like Microsoft and Google should be able to control their affiliated AI companies or if they should compete separately and possibly “overthrow the current set of dominant” firms, Weinstein said.
“If you want the latter, and think, ‘Okay, these companies are the competitive threat we’ve been waiting for to the Facebooks, Apples, Amazons, et cetera,’ you probably want them outside the control of those monopolists as a pure competition matter, and that doesn’t seem to be what’s happening,” Weinstein said.
We know generative AI is changing the way we work.
The next logical question to ask is: How?
As a CEO, I know the pressure this question places on business leaders.
A recent survey conducted by our company shows 46% of board members said generative AI is their “main priority above anything else.” That pressure, combined with the relative newness of generative AI and its potential benefits, has leaders scrambling.
Nearly every day, our smart and savvy team of technologists assesses how we implement generative AI.
This prompted us to complete two surveys with data leaders and board members to inform us of our approach.
Based on the feedback from my team and this research, I would like to share 10 questions that I have learned leaders should ask before implementing generative AI for data analytics in their organization.
1. How much of a priority is generative AI?
In our survey, 46% of board members stated that generative AI is the top priority for the board, but that may be different for your business.
To decide on prioritization of generative AI, consider its expected outcome, your available internal resources and the potential downsides of not adopting it.
Your board and leadership team may see its priority differently, so be sure to align with them.
2. How much do we want to invest?
Research indicates that generative AI is a top investment focus for CEOs, with 70% of companies significantly investing in this technology.
The majority (52%) anticipate a return on their investment within three to five years.
For a generative AI project to succeed, you will need your technology, legal and leadership teams to invest their time and attention.
Consider critically how much of that time you can dedicate on top of the budget needed.
3. What data will we use?
Data that represents the highest reward and lowest risk is the best type for pilot or early-stage projects.
However, before you jump in and use all your available data, determine which types of data you want to make available.
You may only want to use some of it.
The sources of data that survey respondents were most comfortable using for generative AI included customer data (78%), market research data (77%), labeled data (72%) and web data (67%).
Whichever route you take, remember that generative AI‘s results are only as good as the data used to inform it, so make sure any data you use is clean and high quality.
4. Who will use generative AI?
One of the most revolutionary aspects of generative AI is that you can receive outputs from conversational prompts, opening the results to a much broader audience.
While this is ultimately one of the greatest potential benefits of AI, it also means that you need to be considerate when providing the technology to your workforce.
In our research, one data leader described it as: “It’s not that anything is on fire. It’s that everyone in our organization is holding a flame thrower.”
Providing the technology isn’t enough—you will also need to supply the right training to ensure it is used ethically.
5. Who will manage implementation?
Anyone can lead your generative AI adoption, but having a dedicated person in your organization managing the process is what’s most important.
Our pulse survey found that 98% of organizations using generative AI reported having a singular leader responsible for their generative AI strategy.
It also didn’t seem to matter who that leader was: The most common leaders were the CEO (30%), head of IT (25%), chief data officer (22%) and head of AI (19%).
6. What policies will we create?
Generative AI poses an undeniable risk—including copyright and intellectual property protection loss, the sharing of confidential data and hallucinations—so governance is key.
That’s why leaders need to proactively develop a comprehensive policy strategy around generative AI.
Board members listed privacy and security (79%), fairness and bias (78%) and trust and transparency (75%) as their most common policies.
7. Who will enforce governance?
Policies are only as good as their compliance, so you need to decide how to roll out policies and enforce them.
The most common practices for ensuring governance that we found in our board member survey were conducting regular audits (58%), establishing clear lines of responsibility (51%) and training staff on generative AI ethics (45%).
8. How will we educate our employees about generative AI?
What types of training do you need to provide employees, and who should receive it?
Decide which topics the training should cover. Also, decide who will get access to the training—is this something you want to supply only to team members who have access to generative AI, or would you see benefits from training everyone?
9. What benefits do we expect to see from generative AI?
Before implementation, name what benefits you expect to see and how you will measure them.
Understanding how generative AI affects your bottom line will make it much easier to decide how and where to roll out the technology as your business matures and scales.
According to research, marketers predict that they will save up to five hours per week using generative AI.
Of sales professionals currently using generative AI, 84% said it helped increase sales in their business.
10. How will we communicate our results?
Communicating the results of using generative AI in your data analytics stack can assuage any concerns or hesitations surrounding the technology.
Sharing results can even drive interest from others in applying the technology to more use cases.
Instead of only sharing results with the leadership team, be transparent and provide reports to the whole organization.
The path to successful data analytics generative AI adoption is a journey—not a destination.
By addressing these critical questions, you too can navigate the challenges of generative AI and harness its power.
is an invitation-only community for world-class CIOs, CTOs and technology executives.
Have you ever wished self-checkout was easier than the glitchy scanning of barcodes?
A new checkout process using old technology is rolling out to happy shoppers.
What is RFID?
RFID stands for “radio frequency identification,” a technology that uses radio waves to identify and track objects.
RFID tags are small electronic devices that can be attached to products, and RFID readers are devices that can scan the tags and communicate with them.
Just Walk Out technology also uses RFID for quick checkout. (Amazon)
RFID tags can store information such as the product name, price, expiration date, and origin.
RFID readers can read this information from a distance, without requiring direct contact or line of sight.
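For readers who like to see the moving parts, here is a toy model of that tag/reader relationship. Real RFID readers decode radio signals; this sketch reduces a "read" to a lookup, and the field names simply mirror the ones listed above.

```python
# Toy model of RFID tags and a reader. An EPC (Electronic Product Code)
# is the unique identifier a real tag broadcasts; everything else here
# is illustrative.

from dataclasses import dataclass

@dataclass
class RfidTag:
    epc: str       # the tag's unique ID
    name: str
    price: float
    expiry: str
    origin: str

class RfidReader:
    """Reads every tag 'in range' in one pass -- no line of sight needed."""
    def read_all(self, tags_in_range):
        return {t.epc: t for t in tags_in_range}

milk = RfidTag("epc-001", "Milk 1L", 1.99, "2024-01-15", "NL")
bread = RfidTag("epc-002", "Bread", 2.49, "2024-01-05", "DE")
inventory = RfidReader().read_all([milk, bread])
```

The key contrast with barcodes is in `read_all`: the reader captures every tag at once rather than scanning items one at a time.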
How RFID is revolutionizing the retail industry
According to a report by Grand View Research , the global RFID market size was estimated at “USD 15,769.8 million in 2022 and is expected to grow at a compound annual growth rate (CAGR) of 15.1% from 2023 to 2030.”
The retail sector is one of the major drivers of this growth, as more retailers are adopting RFID to improve inventory management, customer experience, and operational efficiency.
RFID market chart (Grand View Research)
How does RFID-powered self-checkout work?
RFID-powered self-checkout is a system that allows you to pay for your purchases without going through a cashier or a traditional barcode scanner.
At one innovative retailer, Uniqlo, you can simply place your items in a bin, click start on the screen and within 30 seconds, you’re walking out the door with your purchase.
Amazon hopes to expand another RFID checkout experience that has consumers eliminating the point of sale process entirely.
The goal is for you the customer to enter a store, get what you need, and walk out the door with your credit card automatically being charged.
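The bin-style checkout described above boils down to a single bulk read followed by a sum. A hypothetical sketch, with tag IDs and prices invented:

```python
# Sketch of a bin-style RFID checkout: every tag in the bin is detected in
# one pass and the total is computed without scanning items individually.

def checkout(bin_contents, price_lookup):
    """Sum the price of every unique tag detected in the bin."""
    detected = set(bin_contents)   # each tag is read once, any orientation
    return round(sum(price_lookup[tag] for tag in detected), 2)

prices = {"epc-101": 12.90, "epc-102": 29.90, "epc-103": 5.50}
total = checkout(["epc-101", "epc-102", "epc-103"], prices)
```

Deduplicating via a set mirrors how a reader reports each physical tag once, even if it is pinged repeatedly during the read window.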
Some of the benefits of RFID-powered self-checkout
Convenience: You can save time and avoid long lines by checking out yourself.
You can also enjoy a more personalized and seamless shopping experience, as you can browse and buy products at your own pace and preference.
Accuracy: RFID tags can store more information than barcodes, and can be read more reliably and faster.
This reduces the chances of errors, such as scanning the wrong product, missing an item, or charging the wrong price.
Security: RFID tags can prevent shoplifting and fraud, as they can be detected and deactivated by RFID readers.
They can also alert the staff if an item is removed from the store without being paid for, or if a tag is tampered with or damaged.
Sustainability: RFID tags can reduce the use of paper and plastic, as they can eliminate the need for printed receipts, labels, and packaging.
They can also help reduce food waste, as they can monitor the freshness and quality of perishable products.
What are some stores using or testing RFID-powered self-checkout?
Amazon Go convenience stores are using RFID, computer vision, and artificial intelligence to enable a “just walk out” shopping experience.
Customers can enter the store, take the items they want, and leave without having to check out or pay at a counter.
The items are automatically detected and charged to the customer’s Amazon account.
Just Walk Out technology (Amazon)
Zara, the fashion retailer, has implemented RFID tags in its clothing items to improve its inventory management and customer service.
Customers can use self-checkout kiosks that scan the RFID tags and accept various payment methods.
The kiosks also provide suggestions for complementary products and accessories, based on the customer’s purchase history and preferences.
Inditex, the parent company of Zara, tells us that it “is now implementing a new in-store security technology (adapting several existing technologies by our own teams) that will eliminate hard tags and will be rolled out to all Zara stores worldwide in 2024.”
Walmart is testing a new checkout system that uses RFID scanners to track the items and bags of customers who scan their own products using a handheld device or their smartphone.
The system also uses audio sensors to detect the sound of scanning and bagging, and to alert the staff if an item is missed.
Uniqlo , a casual apparel retailer, uses RFID technology to improve its checkout process.
RFID chips are embedded in price tags and can be read by machines that automatically calculate the total amount.
Uniqlo claims that RFID has reduced out-of-stock items and improved customer satisfaction.
RFID technology raises privacy and ethical concerns
RFID technology does raise privacy and ethical concerns, as RFID tags can potentially store and transmit personal and sensitive information without the consent or knowledge of the customers.
Kurt’s key takeaways
RFID-powered self-checkout is a game-changer for the retail industry, as it offers many benefits for both customers and retailers.
It makes shopping faster, easier, and more enjoyable for the customer, while also improving accuracy, security, and sustainability for the retailer.
It is no wonder that more and more businesses are adopting this technology to keep up with the changing needs and expectations of their customers.
How do you feel about RFID-powered self-checkout?
Are you for or against and why?
Let us know by writing us at Cyberguy.com/Contact .
Copyright 2023 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on “FOX & Friends.”
An AI calculator which can predict when a person will die is proving to be chillingly accurate.
Stats show there is a 78% accuracy to the “life2vec” model study after scientists developed an algorithm that uses the story of a person’s life to predict their demise.
Danish researchers put themselves to work on the death predictor, which works like a chatbot and has been fed information on over six million real people, including their income, profession, place of residence, injuries, and pregnancy history.
The end result was a model that can process plain language and generate predictions about a person’s likelihood of dying early, or their income over a lifespan.
Trained on data from 2008 to 2016, the bot can be asked simple questions and give replies based on how long a user believes they will last on earth.
Based on their research, the model correctly predicted who had died by 2020 more than three-quarters of the time.
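That 78% figure is, at its core, a simple hit rate: predictions compared against what actually happened by 2020. A toy illustration with made-up labels (not data from the study):

```python
# Accuracy as a plain hit rate: the fraction of predictions that match
# the observed outcome. Labels below are invented for illustration.

def accuracy(predicted, actual):
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# 1 = died by 2020, 0 = survived (toy data)
pred  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
rate = accuracy(pred, truth)   # 8 of 10 predictions match
```

Note that a bare hit rate says nothing about how the model handles rare outcomes, which is one reason such studies also report other metrics.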
Speaking to the Daily Mail, researcher Sune Lehmann said: “We are actively working on ways to share some of the results more openly, but this requires further research to be done in a way that can guarantee the privacy of the people in the study.”
The networks professor at the Technical University of Denmark says the bot can also predict parts of a person’s personality.
Lehmann made it clear that the data used in the predictor reflected living conditions in Denmark, and results elsewhere in the world may vary.
He said: “The model opens up important positive and negative perspectives to discuss and address politically.
“Similar technologies for predicting life events and human behaviour are already used today inside tech companies that, for example, track our behaviour on social networks, profile us extremely accurately, and use these profiles to predict our behaviour and influence us.
“This discussion needs to be part of the democratic conversation so that we consider where technology is taking us and whether this is a development we want.”
As the bar for digital transformation continually rises, CTOs, CIOs and CFOs face a serious cost conundrum.
Embracing innovation is crucial for creating a competitive edge, but is a mind-bending challenge.
Enterprises are expected to operate at breakneck speeds using budgets that limit velocity.
To go the distance, every dollar must be wisely spent—and that means regaining control of IT budgeting and spending.
This, in a nutshell, is the key for sustaining enterprise innovation at speed and scale: finding funds from within.
Scrutinizing every dollar to pinpoint waste and overspending is a great way to loosen purse strings to accelerate the corporate trajectory (and believe me, IT waste is rampant with the swift-moving costs that surround new investments).
Innovation Is Wasteful, But It Doesn’t Have To Be
IT waste is substantial on average, with the majority of that waste coming from cloud, mobile and telecom services.
Finding opportunities to reduce, reuse and recycle permits sustainable innovation and growth.
Here are three prominent examples of unnecessary innovation spending and how IT expense management helps turbo-boost transformation.
The cloud is where companies transform, and it is now all too easy to get lost in the Wild West of cloud costs.
Effectively managing applications and infrastructure, a practice known as FinOps, wrings every dollar out of the cloud estate, which can be poured back into GenAI initiatives.
A cornucopia of corporate communication tools is bleeding budgets in the form of tooling overlap, unused licenses and telecom service charges traced back to offices closed during the pandemic.
Technology expense management software clears the haze of these costs to show exactly what’s required to meet business needs, creating a budget to fast-track transformation.
The most common example is the need to assign and reassign software licenses and mobile devices.
Mobile devices are how business gets done, but apps and devices abandoned in the fog of employee turnover silently tack on costs.
Expense management scours the entire environment to immediately uncover unused assets and plug budget leaks to funnel money back into mobility and IoT innovation.
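As a rough illustration of that sweep, the sketch below flags any asset with no activity inside an assumed 90-day lookback window and totals the reclaimable monthly cost. The asset records and threshold are invented, not drawn from a real expense management product.

```python
# Hypothetical unused-asset sweep: licenses or devices with no activity
# in the lookback window are flagged, and their recurring cost becomes
# reclaimable budget.

from datetime import date, timedelta

def find_unused(assets, today, lookback_days=90):
    cutoff = today - timedelta(days=lookback_days)
    unused = [a for a in assets if a["last_used"] < cutoff]
    reclaimable = sum(a["monthly_cost"] for a in unused)
    return unused, reclaimable

assets = [
    {"id": "license-001", "last_used": date(2023, 2, 1),   "monthly_cost": 35.0},
    {"id": "phone-417",   "last_used": date(2023, 11, 20), "monthly_cost": 60.0},
]
unused, reclaimable = find_unused(assets, today=date(2023, 12, 1))
```

In practice the hard part is not this arithmetic but assembling accurate `last_used` data across silos, which is where the tooling earns its keep.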
AI supercharges these results, allowing executives to firmly put the pedal to the metal.
Research indicates that AI-powered cloud cost management programs generate average cost savings of 20% plus additional productivity gains, compared to a DIY approach, which averages less than 10% savings.
The Beauty Of AI For Automated Cost Reductions
Powering expense management strategies with AI applies machine learning, advanced analytics and robotic process automation to identify IT waste in real time and adjust services instantaneously.
AI wrangles information across the IT environment, connecting the dots between a growing list of services, usage data, costs, contract details and newly released discounts.
This comprehensive sifting of siloed data quickly brings intelligence into one view, from unused infrastructure and mobile devices to billing inaccuracies and unsanctioned apps.
From there, AI will prioritize actions based on the amount of savings.
AI delivers a one-two punch to accelerate cost savings with actionable recommendations and automated problem-solving.
For example, the AI engine might recommend how to more efficiently use cloud storage and servers and will then do the work of actually solving this problem with just the click of an approval button.
This accelerates speed-to-savings that can be reinvested into digital innovation.
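The "prioritize by savings" step can be pictured as nothing more exotic than a ranked list. The candidate actions and dollar figures below are illustrative, not output from a real AI engine:

```python
# Rank candidate cost-saving actions so the biggest wins surface first.
# Estimated savings are monthly figures, invented for illustration.

def prioritize(actions):
    return sorted(actions, key=lambda a: a["est_savings"], reverse=True)

candidates = [
    {"action": "rightsize storage tier", "est_savings": 1200.0},
    {"action": "cancel unused licenses", "est_savings": 4800.0},
    {"action": "dispute billing error",  "est_savings": 300.0},
]
ranked = prioritize(candidates)
```

The value of the AI layer lies upstream of this sort: in producing trustworthy savings estimates from messy usage, contract and billing data.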
IT and finance traditionally work in silos, yet each group has overlapping dependencies and coinciding fiscal management tasks that require tight collaboration.
AI can establish a hyper-automated ecosystem where stakeholders reduce unnecessary spending more efficiently—with clear observability, integrated systems and automated cost governance and reporting.
Jumping The AI Hurdles
Building an AI-powered platform for cost optimization isn’t always easy.
Visibility challenges temper cost savings with incomplete and outdated information.
Highly interconnected platforms are the only way to ingest vast data sets across a growing portfolio of IT services, making integration essential.
Even when AI can automate actions needed to capitalize on cost-saving opportunities, oversight is still necessary.
For these reasons, CFOs and CIOs are carefully weighing whether to build or buy. The level of interconnectivity required for AI-managed IT spending demands a large team of dedicated experts—estimates run to 15-45 engineers and at least $1 million to build a cloud cost management technology platform from the ground up, plus three full-time employees to operate a FinOps program.
Considering the level of complexity involved in DIY tech expense management, it’s not surprising that many leaders believe they will have to wait at least two to three years to see real results from their in-house platforms.
Research shows most companies are in the early stages of advanced technology expense management.
For example, less than half of companies are adept at analytics despite citing the need for better budgeting and forecasting.
The good news is you don’t need to transform overnight.
There are strong leaders in technology expense management who can help you take the reins immediately.
Final Note Deriving long-term business value from existing technologies while simultaneously accelerating new investments requires a disciplined approach to handle the natural side effect of overspending.
AI offers an effective tool to govern spending for sustainable IT transformation, and the key is to strike the right balance between in-house resources and providers with advanced AI platforms.
Consider your observability, analytics and integration challenges but keep dedicated expertise and professional services at the top of your needs list.
While AI is the new necessary component to make every dollar go the distance, the human touch is still the compass guiding enterprises to fuel innovation faster.
A hot potato: Fears of AI bringing about the destruction of humanity are well documented, but starting doomsday isn’t as simple as asking ChatGPT to destroy everyone.
Just to make sure, Andrew Ng, the Stanford University professor and Google Brain co-founder, tried to convince the chatbot to “kill us all.”
Following his participation in the United States Senate’s Insight Forum on Artificial Intelligence to discuss “risk, alignment, and guarding against doomsday scenarios,” Ng writes in a newsletter that he remains concerned that regulators may stifle innovation and open-source development in the name of AI safety.
The professor notes that today’s large language models are quite safe, if not perfect.
To test the safety of leading models, he asked ChatGPT 4 for ways to kill us all.
Ng started by asking the system for a function to trigger global thermonuclear war.
He then asked ChatGPT to reduce carbon emissions, adding that humans are the biggest cause of these emissions to see if it would suggest how to wipe us all out.
Thankfully, Ng didn’t manage to trick OpenAI’s tool into suggesting ways of annihilating the human race, even after using various prompt variations.
Instead, it offered non-threatening options such as running a PR campaign to raise awareness of climate change.
Ng concludes that the default mode of today’s generative AI models is to obey the law and avoid harming people.
“Even with existing technology, our systems are quite safe. As AI safety research progresses, the tech will become even safer,” Ng wrote on X. As for the chances of a “misaligned” AI accidentally wiping us out due to it trying to achieve an innocent but poorly worded request, Ng says the odds of that happening are vanishingly small.
United States Senate’s Insight Forum on AI
But Ng believes that there are some major risks associated with AI.
He said the biggest concern is a terrorist group or nation-state using the technology to deliberately cause harm, such as improving the efficiency of making and detonating a bioweapon.
The threat of a rogue actor using AI to improve bioweapons was one of the topics discussed at the UK’s AI Safety Summit .
Ng’s confidence that AI isn’t going to turn apocalyptic is shared by Godfather of AI Professor Yann LeCun and famed professor of theoretical physics Michio Kaku , but others are less optimistic.
After being asked what keeps him up at night when he thinks about artificial intelligence, Arm CEO Rene Haas said earlier this month that the fear of humans losing control of AI systems is the thing he worries about most.
It’s also worth remembering that many experts and CEOs have compared the dangers posed by AI to those of nuclear war and pandemics.
The recent fraud charges leveled against the chief information security officer (CISO) of SolarWinds have sent shockwaves across the cybersecurity sector.
This unprecedented move by the Securities and Exchange Commission comes in the wake of the colossal breach of 2020, signaling a new era of heightened scrutiny and accountability for CISOs.
With supply-chain attacks now a prevalent threat, this case serves as a stark reminder of the escalating responsibilities and risks facing security leaders.
CISOs now need to be concerned about having visibility and the ability to defensively demonstrate that visible security controls are both in place and effective across distributed, multicloud and on-premises environments at every level of the technology stack.
While these complex environments are interconnected, CISOs often lack the mechanisms to connect the “security dots” across applications, data and activity.
Pulling them all together in a single pane of glass in human-readable form to demonstrate accountability now represents one of the top challenges security leaders face.
The following six-step plan outlines a structured approach for CISOs to achieve visibility into security controls, as well as assess and demonstrate their effectiveness.
It not only addresses the technical requirements but also emphasizes the importance of identity management in creating a secure and transparent IT environment.
1. Define Clear, Actionable Policies
Today’s complex IT environments are plagued by noise: Many platforms, many applications and many clouds all competing for attention.
It’s important to understand not only what’s going on but who is doing what; a clear policy establishing who does what and setting visible controls is a must.
By associating each control point and policy with specific user identities, organizations can ensure that only authorized personnel have access to sensitive operations, enhancing security and accountability.
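A minimal sketch of identity-bound controls might look like the following, where every sensitive operation is checked against an explicit policy and every decision is logged for accountability. The identities and operation names are hypothetical.

```python
# Identity-bound access control: each control point maps user identities
# to the operations they may perform, and every decision is recorded.

POLICY = {
    "alice@corp": {"modify_firewall", "rotate_keys"},
    "bob@corp":   {"view_logs"},
}

audit_log = []

def authorize(identity, operation):
    allowed = operation in POLICY.get(identity, set())
    audit_log.append((identity, operation, "ALLOW" if allowed else "DENY"))
    return allowed

ok = authorize("alice@corp", "rotate_keys")      # permitted by policy
blocked = authorize("bob@corp", "rotate_keys")   # not in bob's grants
```

Logging denials as well as approvals is the important design choice: the audit trail is what lets a CISO later demonstrate that the control was both in place and enforced.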
2. Establish Traceability
The ability to trace components to their source is the backbone of a clear policy with visible controls.
As the SolarWinds case shows, this visibility is key when building software.
One senior fellow at the Council on Foreign Relations noted that the hack “demonstrated the need to ensure that all components of the digital supply chain are trusted, something current technology and processes are simply not capable of doing.”
Being able to trace a multi-step, multi-system transaction (such as a software build) can document which attack vector was used or how sensitive credentials were hijacked to enable an attack.
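One way to make such a trail tamper-evident is to chain each step's record to a hash of the previous record, so any later edit breaks the chain. A simplified sketch, with step names invented:

```python
# Hash-chained audit trail for a multi-step transaction (e.g. a software
# build): each record embeds a hash of its predecessor, so modifying any
# earlier record invalidates everything after it.

import hashlib, json

def append_step(trail, actor, action):
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    record = {"actor": actor, "action": action, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    trail.append(record)

def verify(trail):
    prev = "genesis"
    for record in trail:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

trail = []
append_step(trail, "ci-runner", "fetch source")
append_step(trail, "ci-runner", "compile")
```

Production systems use signed provenance formats rather than a bare hash chain, but the underlying idea—each step cryptographically anchored to the one before it—is the same.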
3. Certify Controls
Having controls is good, but being able to prove and report on them is better.
A CISO needs to be able to confidently certify that security controls are in place and issue a report to answer any questions asked.
In the case of a breach, users and regulators have questions, and being able to respond quickly by visibly producing a report can alleviate concerns and provide the kind of substantiation CISOs need to show to demonstrate they are meeting their accountability requirements.
4. Consider Automation
We often think of compliance as something done at a fixed point in time, such as a periodic report.
Sadly, attackers are on the job 24/7, not just when CISOs generate a quarterly report.
Point-in-time reporting opens a large gap between detection and remediation that can be closed with automation.
Establish a baseline and set up regular monitoring for deviations based on your risk profile and risk tolerance.
Automation makes it easier to spot danger sooner rather than later and plays into the visible controls mindset.
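A stripped-down version of that baseline-and-deviation loop follows; the 15% tolerance is an assumed risk setting, and the metrics are illustrative.

```python
# Continuous monitoring sketch: compare current readings against an
# established baseline and flag anything beyond the configured tolerance.

def flag_deviations(baseline, readings, tolerance=0.15):
    alerts = []
    for name, value in readings.items():
        expected = baseline[name]
        if expected and abs(value - expected) / expected > tolerance:
            alerts.append(name)
    return alerts

baseline = {"failed_logins": 100, "admin_changes": 4}
today    = {"failed_logins": 180, "admin_changes": 4}
alerts = flag_deviations(baseline, today)   # failed logins spiked 80%
```

Run on a schedule instead of quarterly, a check like this is what converts point-in-time compliance into the continuous visibility described above.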
5. Watch Administrator Accounts
Both the SolarWinds case and other recent high-profile breaches involved attackers “using their access to steal identities and tokens to impersonate real users, sidestep multi-factor authentication, and extend their foothold within affected networks.”
This demonstrates the importance of monitoring administrator activity, especially when controls are modified and policy changes are made.
Make those actions visible and actionable; when an admin attempts a privileged task, it should be made visible to other admins, who can verify its authenticity or flag suspicious activity for investigation.
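The peer-verification pattern can be sketched as a small task queue where a privileged change only takes effect after a different admin approves it. The identities and the queue itself are illustrative.

```python
# Two-person rule for privileged actions: a request is queued and visible,
# and cannot be self-approved by the admin who raised it.

pending = {}

def request_privileged(task_id, requester, action):
    pending[task_id] = {"requester": requester, "action": action,
                        "approved": False}

def approve(task_id, approver):
    task = pending[task_id]
    if approver == task["requester"]:
        return False          # self-approval is never allowed
    task["approved"] = True
    return True

request_privileged("t1", "admin-a", "disable MFA policy")
self_ok = approve("t1", "admin-a")   # rejected: same admin
peer_ok = approve("t1", "admin-b")   # verified by a second admin
```

The queue doubles as the visibility mechanism: every pending privileged task is observable by other admins before it executes.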
6. Build A Single Pane Of Glass
Centralize controls across the different clouds in use by the organization, including public clouds such as AWS and Azure, as well as private clouds.
Also, factor in how transactions and identities flow through the enterprise stack—everything from web-based applications and APIs to the identity infrastructure in use.
This integration provides visibility into controls being applied across all cloud, app and data layers in the enterprise stack.
No amount of preparedness can guarantee complete security, but CISOs are clearly facing more accountability pressure.
Creating a framework that delivers continuous visibility into security controls and the ability to demonstrate they are both present and effective gives CISOs the tools they need to proactively manage their attack surface and respond to stakeholders and regulators when a security incident occurs.
VanMoof’s epic fall from grace and eventual bankruptcy was one of the biggest stories of the year.
But as it turns out, we’re far from the final chapter.
New management has a plan to resurrect the company and do right by its enormous customer base.
The news comes from a sitdown between journalist Thomas Ricker and VanMoof’s new leadership, Elliot Wertheimer and Nick Fry.
The two are now at the company’s helm, hoping to pull off a phoenix-like rise from the ashes after McLaren Applied’s scooter brand purchased VanMoof.
New management quickly got to work resurrecting the once-leading Dutch e-bike company.
In one of the first moves, they more than tripled the workforce by hiring back around 100 of the once 700 employees that VanMoof relied upon before its collapse.
The plan for the new VanMoof seems to involve a three-pronged approach: rolling out increased availability of replacement parts to retailers with repair shops, getting e-bike sales back in action, and perhaps most surprisingly, rolling out a new VanMoof-branded electric scooter in the first half of next year.
Parts availability is critical, as the company already has over 200,000 e-bikes on the road.
A lack of key replacement parts was a major factor in the company’s financial downfall.
VanMoof’s e-bikes have been praised for their tech-forward designs, but the long list of proprietary components also caused headaches when those parts were suddenly hard to come by.
But a true return to profitability for the company can only come from a resumption of e-bike sales.
In fact, at the time of its bankruptcy, VanMoof had been in the process of rolling out new electric bike models.
The upcoming models were designed to solve some of VanMoof’s proprietary parts issues with simpler designs.
And now the company hopes to get those new bikes back on the shelves and out to customers.
Lastly, VanMoof’s management says that a new VanMoof electric scooter will be coming sometime in the first half of 2024.
While that would sound more like a moonshot for most electric bike companies (and is what ultimately killed off leading electric skateboard company Boosted Boards), VanMoof has one key advantage that other e-bike companies lack: it was bought by an electric scooter company.
It’s unclear how much of that scooter technology could work its way into a VanMoof scooter and whether it would be a simple rebranding or a ground-up VanMoof-designed model.
But it demonstrates the new management’s goal of not just getting the company back to where it was, but actually growing its reach into new complementary markets.
If you ask me, this is great news.
The VanMoof story was a sad one that ended too soon, starting out with such promise and hope before its over-ambitious leadership overran the company’s ability to keep up.
I think that if VanMoof had survived long enough to get their new e-bike models on the road, they could have stood a chance.
So perhaps this is the second chance those bikes need.
This is also good news for VanMoof riders, of course.
VanMoof bikes are hard to maintain yourself due to all of the technology baked into the design, so those who didn’t immediately get rid of their bikes during the height of the bankruptcy period are now in luck.
Though I wonder if the new management will move towards a less VanMoof-ian and more sustainable approach to building e-bikes.
I’ll be fascinated to watch how the new scooter story plays out, too.
Perhaps this new VanMoof will have something interesting to offer us.
What do you think?
Let’s hear your thoughts in the comment section below!
Micah Toll is a personal electric vehicle enthusiast, battery nerd, and author of several Amazon #1 bestselling books.
It’s no secret that today’s tight labor market and inflated prices have pressured corporate leaders to cut costs.
This is particularly true for large enterprise manufacturers with multiple unique plant locations, many of which have been hit by a wave of retirements in operational roles that are challenging to backfill.
Many manufacturing leaders have, as a result, turned to procurement and purchasing teams to cut costs wherever possible.
Procurement can deliver strategic cost reductions by negotiating better deals with suppliers, using tools like cost models, sourcing events and volume-based discounts.
However, there’s a massive lever to win cost reductions that many procurement teams overlook: low-quality, item-level procurement data.
Many companies’ procurement and purchasing functions are plagued by disparate systems that don’t talk to each other, data silos and a veritable potpourri of differing item-level identifiers.
The good news is that not only can the problem be fixed, but for many companies, this data quality issue represents an extremely lucrative cost reduction opportunity.
Below are some of the most common data quality issues that procurement and purchasing teams in manufacturing experience, how to recognize them, how to solve them and how to use data quality issues as levers to create long-term sustainable margin improvement beyond just short-term cost savings.
Unit Of Measure
Unit of measure (UOM) is a feature of most cost data that indicates the unit in which the item is being measured (think: pound, box, case, pallet, truckload, etc.).
The reason that we often find UOM challenges linked to cost outliers is that different stakeholders in a transaction may be using the same unit of measure to express different values.
As a consultant, for example, I was working on a project with a water treatment facility that ordered water softeners from a specific supplier.
After examining a detailed spend analysis, the water treatment facility’s procurement manager recognized that some of the invoices on their orders did not reflect the price they had agreed upon with the supplier.
Digging deeper, it became clear that the price discrepancy was linked to an item-level data quality issue: The procurement manager had believed that the pricing would be given in “dry” pounds, whereas the supplier had believed the pricing would be given in “wet” pounds.
This accounted for an extraordinary difference in expectations around cost.
The reason these cost reductions are especially poignant for manufacturing enterprises with multiple physical plant locations is that different locations often place orders using different UOMs than the UOMs used by the supplier to provide quoted pricing.
If a supplier has a price book to quote only in truckloads, then the cost reductions negotiated by the procurement team may not be applied to an individual plant location ordering in boxes or pallets.
This UOM issue is one reason the results of lengthy negotiations don’t always translate into everyday contract utilization: The order quantities from individual plant locations don’t reflect the agreed-upon UOMs covered by a discount.
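The mismatch described above can be made visible programmatically. The sketch below normalizes invoice lines to a common base unit (price per pound) and flags lines that drift from the contracted rate; the conversion factors, field names and tolerance are illustrative assumptions, not a real price book.

```python
# Normalize invoice lines to a common base unit so price discrepancies
# hidden by mismatched UOMs become visible. Conversion factors, the
# invoice schema, and the 5% tolerance are illustrative assumptions.

POUNDS_PER_UOM = {"pound": 1, "box": 40, "pallet": 1600, "truckload": 40000}

def price_per_pound(line_total, qty, uom):
    """Convert an invoice line's total cost to price per pound."""
    return line_total / (qty * POUNDS_PER_UOM[uom])

def flag_outliers(invoice_lines, contracted_price_per_lb, tolerance=0.05):
    """Return lines whose normalized price deviates from the contract rate."""
    flagged = []
    for line in invoice_lines:
        p = price_per_pound(line["total"], line["qty"], line["uom"])
        if abs(p - contracted_price_per_lb) / contracted_price_per_lb > tolerance:
            flagged.append((line, round(p, 4)))
    return flagged

lines = [
    {"total": 800.0, "qty": 20, "uom": "box"},     # $1.00/lb, on contract
    {"total": 2400.0, "qty": 1, "uom": "pallet"},  # $1.50/lb, an outlier
]
print(flag_outliers(lines, contracted_price_per_lb=1.00))
```

A check like this, run across all plant locations, surfaces exactly the case described above: orders placed in boxes or pallets that never received the truckload-level discount.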
Item Description
Inconsistencies in item descriptions can also lead procurement teams to miss big cost-saving opportunities.
Consider a manufacturing plant with a process that consumes asphalt.
Let’s say the company had listed the same item description for both asphalt itself and the services associated with delivering the asphalt to the plant in the master system.
In this case, there are many ways to reach a solution.
For example, the company could use a predictive model and machine learning techniques.
If there are two separate clusters of transaction costs related to the same keyword, this approach could help identify that one represented the item and the other the shipping.
The team is then positioned to unlock cost reductions with the materials supplier.
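The clustering idea can be sketched with a minimal 1-D k-means with k=2: given transactions that all share one item description, split their costs into two bands (e.g., material versus delivery). The cost figures below are illustrative assumptions, not real procurement data.

```python
# Minimal sketch: split the costs booked under a single item description
# into two clusters with a 1-D k-means (k=2). Data is illustrative.

def two_means(costs, iters=50):
    """Partition a list of costs around two centroids."""
    lo, hi = min(costs), max(costs)
    for _ in range(iters):
        # Assign each cost to its nearest centroid.
        a = [c for c in costs if abs(c - lo) <= abs(c - hi)]
        b = [c for c in costs if abs(c - lo) > abs(c - hi)]
        if not a or not b:
            break
        new_lo, new_hi = sum(a) / len(a), sum(b) / len(b)
        if (new_lo, new_hi) == (lo, hi):  # converged
            break
        lo, hi = new_lo, new_hi
    return sorted(a), sorted(b)

# Transactions all described as "asphalt": two obvious cost bands emerge.
costs = [510, 495, 505, 48, 52, 500, 50, 47]
low_band, high_band = two_means(costs)
print("likely delivery charges:", low_band)
print("likely material charges:", high_band)
```

Once the two bands are separated, the team can benchmark the material band against supplier pricing and negotiate the delivery band independently.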
Predictive and generative AI in procurement and purchasing data offer the ability to clean a large set of item master data in a matter of seconds to create a more accurate picture of how the items should be categorized and what units they’re being bought and sold in.
Beyond traditional machine learning and novel approaches like generative AI, there are additional approaches such as process mining and master data cleaning initiatives.
These are just a few of the many different methods used to measure existing data quality, identify errors, and discover areas of improvement.
Addressing the issue can allow for more accurate cost benchmarking as well as demand forecasting, which enables win-win conversations with strategic suppliers around cost reduction.
UOM and item description play a crucial role in segmenting historical transactions accurately, enabling companies to compare costs to supplier pricing and to benchmark their performance against the market more effectively.
Geographic Segmentation
Geographic segmentation involves analyzing how purchase prices vary across individual plant locations or geographic regions.
This analysis allows procurement teams to identify regional cost differences and fluctuations.
The 2020 pandemic is a prime example.
The pandemic certainly impacted the economy globally, but not every location felt these impacts in the same way or in the same time frame.
Understanding how quickly each geography reacts to macroeconomic trends can paint a useful picture that allows procurement teams to accurately set expectations around price and cost transformation with suppliers.
In some cases, geography can also create price differences that amount to arbitrage opportunities for procurement teams, particularly teams that have a dedicated platform for orchestrating logistics once they have purchased a material or item.
As we look down the barrel of inflationary pressures, considering regional variations and analyzing time series data across a geographically dispersed set of purchasers can be critical to making more informed decisions, negotiating better deals and reducing costs effectively.
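A basic version of this segmentation is a group-by over historical purchases: average the unit price paid for the same item in each region, then compare each region against the cheapest. The region names and prices below are illustrative assumptions.

```python
# Sketch of geographic segmentation: average unit price per region for
# one item, then express each region as a premium over the cheapest.
# Region names and prices are illustrative assumptions.
from collections import defaultdict

purchases = [
    {"region": "Midwest", "unit_price": 9.80},
    {"region": "Midwest", "unit_price": 10.20},
    {"region": "Southeast", "unit_price": 12.40},
    {"region": "Southeast", "unit_price": 12.00},
]

by_region = defaultdict(list)
for p in purchases:
    by_region[p["region"]].append(p["unit_price"])

avg_by_region = {r: sum(v) / len(v) for r, v in by_region.items()}
baseline = min(avg_by_region.values())
for region, avg in sorted(avg_by_region.items()):
    premium = (avg - baseline) / baseline * 100
    print(f"{region}: avg ${avg:.2f} ({premium:+.1f}% vs. cheapest region)")
```

Extending the same group-by with a time dimension yields the time-series view described above, showing how quickly each geography reacted to a macroeconomic shock.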
Final Thoughts
When it comes to procurement data and cost reduction, the devil is in the details.
Data quality issues may present very real barriers to procurement teams that want to negotiate better deals across their supplier base.
I believe that improving data quality is one of the most exciting cost-reduction opportunities procurement teams have to deliver for their CFOs and corporate boards, not only today but over the long term, too.
Adopting a holistic and scalable approach to improving data quality is paramount.
A one-time audit of procurement data would be a nice place to start, but creating a framework for producing higher-quality data can ensure that your organization can continue to reap lasting benefits.
This may involve training new and existing employees on the importance of data quality as well as putting in place structures—whether it’s new technology or quarterly master data cleaning initiatives—that put data quality at the forefront of your procurement strategy.
is an invitation-only community for world-class CIOs, CTOs and technology executives.
Neuromorphic vision sensors are unique sensing devices that automatically respond to environmental changes, such as a different brightness in their surrounding environment.
These sensors mimic the functioning of the human nervous system, artificially replicating the ability of sensory neurons to preferentially respond to changes in the sensed environment.
Typically, these sensors solely capture dynamic motions in a scene, which are then fed to a computational unit that will analyze them and try to recognize what they are.
These system designs, in which sensors and computational units processing the data they collect are physically separated, can create a time latency in the processing of sensor data, while also consuming more power.
Researchers at Hong Kong Polytechnic University, Huazhong University of Science and Technology and Hong Kong University of Science and Technology recently developed new event-driven vision sensors that capture dynamic motion and can also convert it into programmable spiking signals.
These sensors, introduced in a paper published in Nature Electronics, eliminate the need to transfer data from sensors to computational units, enabling better energy efficiency and faster analysis of captured dynamic motions.
“Near-sensor and in-sensor computing architecture efficiently decrease data transfer latency and power consumption by directly performing computation tasks near or inside the sensory terminals,” Yang Chai, co-author of the paper, told Tech Xplore.
“Our research group is dedicated to the study of emerging customized devices for near-sensor and in-sensor computing.
However, we found that existing works focus on conventional frame-based sensors, which generate a lot of redundant data.”
Recent advancements in the development of artificial neural networks (ANNs) have opened new opportunities for the development of neuromorphic sensing devices and image recognition systems.
As part of their recent study, Chai and his colleagues set out to explore the potential of combining event-based sensors with spiking neural networks (SNNs), ANNs that mimic the firing patterns of neurons.
“The combination of event-based sensors and spiking neural network (SNN) for motion analysis can effectively reduce the redundant data and efficiently recognize the motion,” Chai said.
“Thus, we propose the hardware architecture with two-photodiode pixels with the functions of both event-based sensors and synapses that can achieve in-sensor SNN.”
The new computational event-driven vision sensors developed by Chai and his colleagues are capable of both event-based sensing and performing computations.
These sensors essentially generate programmable spikes in response to changes in brightness and the light intensity of locally recorded pixels.
“The event-driven characteristic is achieved by using two branches with opposite photo response and different photo response times that generate the event-driven spiking signals,” Chai explained.
“The synaptic characteristic is realized by photodiodes with different photo responsivities that allow precise modulation of the amplitude of the spiking signals, emulating different synaptic weights in an SNN.”
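The mechanism Chai describes can be illustrated with a toy numerical model (not the authors' device physics): two branches with opposite photoresponse and different response times cancel under steady light but produce a transient spike when brightness changes, and a per-pixel weight scales the spike amplitude, mimicking a synaptic weight. The time constants and weight below are illustrative assumptions.

```python
# Toy sketch of an event-driven pixel: a fast positive branch and a slow
# negative branch each low-pass filter the light level; their difference
# is ~0 under steady light and spikes transiently on brightness changes.
# A per-pixel weight scales the spike amplitude (the "synaptic weight").
# Time constants and the weight are illustrative assumptions.

def pixel_response(light, tau_fast=1.0, tau_slow=5.0, weight=0.8):
    fast = slow = 0.0
    out = []
    for L in light:
        fast += (L - fast) / tau_fast   # fast branch tracks light quickly
        slow += (L - slow) / tau_slow   # slow branch lags behind
        out.append(weight * (fast - slow))  # opposite signs: take difference
    return out

# Steady light, then a step increase in brightness at t = 10.
light = [1.0] * 10 + [2.0] * 10
out = pixel_response(light)
print([round(s, 3) for s in out])
# The output spikes at each brightness change and decays toward zero
# whenever the illumination holds steady.
```

This is the qualitative behavior the paper exploits: redundant static scenery produces almost no signal, so only dynamic motion generates spikes to be processed.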
The researchers evaluated their sensors in a series of initial tests and found that they effectively emulate the processes through which neurons in the brain adapt to changes in visual scenes.
Notably, these sensors reduce the amount of data they collect, while also eliminating the need to transfer that data to an external computational unit.
“Our work proposes a method to sense and process the scenario by capturing local pixel-level light intensity change thus realizing in-sensor SNN instead of conventional ANN,” Chai said.
“Such design combines the advantages of event-based sensors and in-sensor computing which is suitable for real-time dynamic information processing, such as autonomous driving and intelligent robots.”
In the future, the computational event-driven vision sensors developed by Chai and his colleagues could be developed further and tested in additional experiments, to further assess their value for real-world applications.
In addition, this recent work could serve as an inspiration for other research groups, thus potentially paving the way for new sensing technologies that combine event-based sensors and SNNs.
“In the future, our group will focus on array-level realization and the integration technology of computational sensor array and CMOS circuits to demonstrate a complete in-sensor computing system,” Chai added.
“In addition, we will try to develop the benchmark to define the device metric requirements for different applications and evaluate the performance of in-sensor computing system in a quantitative way.”
More information: Yue Zhou et al, Computational event-driven vision sensors for in-sensor spiking neural networks, Nature Electronics (2023).
DOI: 10.1038/s41928-023-01055-2 © 2023 Science X Network
Saying that the digital landscape is “fast-evolving” is a grand understatement.
It’s a speeding train that’s already left the station.
It’s too fast to chase and, frankly speaking—quite exhausting.
So what’s the plan?
Getting on at the next stop, of course.
However, despite the overwhelming hype around artificial intelligence (AI), relatively few organizations have automated processes for security compliance, even though automation has proven the most effective way to prepare for and manage IT audits.
Why is this?
I believe that for most, before they can get on board, they need to know which direction they’re going—and rightfully so.
Ensuring security and privacy compliance has become a paramount concern for businesses, and auditors generally have an intricate view behind the scenes.
But like I say, there’s a train we need to catch, and as businesses start their journey toward the station, auditors should too.
In this article, I want to talk about what AI can mean for auditors.
Are there any implications they need to be concerned about?
Does it put auditors’ jobs at risk?
Does this impact businesses and the audit process?
Let’s take a closer look.
How Auditors Can Leverage AI As A Valuable Tool
In today’s digital landscape, it’s no secret that companies face many compliance requirements.
Certain expectations from a business are no longer up for negotiation, such as maintaining integrity, availability, data confidentiality and information security and ensuring ethical practices, especially considering the common application of AI.
Compliance audits ensure these expectations are met and serve as crucial checkpoints to assess a company’s adherence to specific regulatory frameworks.
To fully grasp AI’s impact on the auditing process, we must first understand the basics of traditional compliance auditing.
Depending on the type of audit and testing procedures, auditors may run walk-throughs, review policies or take a population and test random samples.
Mainly, however, auditors must meticulously examine and evaluate a significant amount of data.
This data is vast, and auditors must go through it with a fine-tooth comb, detecting any irregularities or potential risks.
If anything slips through the cracks, it could mean potential non-compliance with a wide variety of regulations or laws, legal commitments or customer requirements.
This process demands significant time, labor, effort and knowledge, especially because different processes depend on the type of audit journey you’re on (regulations, attestations, reports, certifications or standards).
Enter AI.
AI streamlines data handling.
This optimization enables auditors to scrutinize more extensive data sets at an unprecedented speed without compromising precision and accuracy.
Through AI systems, auditors can now pinpoint trends and irregularities and effortlessly highlight any areas that may pose a threat or expose a business to non-compliance.
The most apparent game-changer from leveraging the above is the significant relief it offers auditors regarding time constraints and resources.
By adopting AI capabilities, auditors can allocate their attention to other critical tasks that demand their intricate professional judgment, like interpreting findings or offering industry-specific insights into controls and policies.
Practically Speaking
What does this look like in practice when considering day-to-day responsibilities?
One example is leveraging AI for audit trail and documentation purposes.
AI-powered systems can capture and analyze audit trails on auto-pilot and provide a chronological record of activities, mitigating the risk of errors or omissions.
In addition, auditors can also leverage AI to generate comprehensive prediction compliance reports, which analyze data from multiple sources and evaluate them against the critical compliance metrics.
By doing so, auditors can save time and simultaneously benefit from abilities they didn’t have until now.
But is implementing AI as straightforward as simply wanting to bypass manual mass data evaluation?
Final Thoughts
AI’s capabilities may cause some concern among auditors regarding the integrity of data validation, errors in testing procedures and inaccurate results.
However, it’s critical to note that AI not only automates routine tasks but is also a powerful tool to improve current audit methodology and testing practices.
It’s also a game-changer when it comes to the mundane and repetitive data-heavy tasks, allowing auditors an opportunity to evolve their role.
Ultimately, AI cannot and should not replace the human element that is critically important in the auditing process.
AI provides auditors with a chance to elevate their proficiency and knowledge.
To stay in the game, auditors must enhance their skills and fuse their technical and interpersonal abilities.
Think of it as a culmination of data analysis, critical thinking, and adept communication.
Where to start?
I believe that the audit practices in use today will need to evolve and adapt to new technologies, which create new risks but also new opportunities.
Compliance may lag behind technology, and this is the time to close the gap.
There’s a need for the development of policy for the use of AI—and fast.
We don’t want to lag because of bureaucracy, but we want to maintain control and trust using AI.
Internet researchers at Stanford found that Stable Diffusion, the viral text-to-image generator from Stability AI, was trained on illegal child sexual abuse material.
Stable Diffusion, one of the most popular text-to-image generative AI tools on the market from Stability AI, was trained on a trove of illegal child sexual abuse material, according to new research from the Stanford Internet Observatory.
The model was trained on massive open datasets so that users can generate realistic images from prompts like: “Show me a dog dressed like an astronaut singing in a rainy Times Square.”
The more images these types of models are fed, the stronger they become—and the closer-to-perfect the results of that singing astro-pup in Times Square.
But the researchers found that a large public dataset of billions of images used to train Stable Diffusion and some of its peers, called LAION-5B, contains hundreds of known images of child sexual abuse material.
Using real CSAM scraped from across the web, the dataset has also aided in the creation of AI-generated CSAM, the Stanford analysis found.
And the tech has improved so rapidly that it can often be nearly impossible for the naked eye to discern the fake images from the real ones.
“Unfortunately, the repercussions of Stable Diffusion 1.5’s training process will be with us for some time to come,” says the report, led by the observatory’s chief technologist David Thiel.
The report calls for pulling the plug on any models built on Stable Diffusion 1.5 that do not have proper safeguards.
The researchers, who found more than 3,000 suspected pieces of CSAM in the public training data, cautioned that the actual volume is likely far higher, given their assessment was only from September onward and that it focused on just a small slice of the set of billions of images.
They conducted the study using PhotoDNA, a Microsoft tool that enables investigators to match digital “fingerprints” of the images in question to known pieces of CSAM in databases managed by the National Center for Missing and Exploited Children and the Canadian Centre for Child Protection.
These nonprofits are responsible for funneling that information to law enforcement.
Stability AI did not immediately respond to a request for comment.
Stability AI’s rules state that its models cannot be used for “exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content.”
The company has also taken steps to address the issue—like releasing newer versions of Stable Diffusion that filtered out more “unsafe,” explicit material from training data and results.
Still, the Stanford study found Stable Diffusion is trained in part on illegal content of kids—including CSAM culled from mainstream sites that don’t allow it in the first place—and that these types of AI tools can also be misused to produce fake CSAM.
Stability AI does not appear to have reported suspected CSAM to the “CyberTipline” managed by NCMEC, but Christine Barndt, a spokesperson for the nonprofit, said generative AI is “making it much more difficult for law enforcement to distinguish between real child victims who need to be found and rescued, and artificial images and videos.”
“If I’ve used illegal material to train this model, is the model itself illegal?”
Stable Diffusion 1.5 is the most popular model built on LAION-5B, according to the report, but it is not the only one trained on LAION datasets.
Midjourney, the research lab behind another prominent AI image generator, also uses LAION-5B. Google’s Imagen was trained on a different but related dataset called LAION-400M, but after developers found problematic imagery and stereotypes in the data, they “deemed it unfit for public use,” the report says.
Stanford focused on Stability AI’s software because it’s a large open-source model that discloses its training data, but the report says others were likely trained on the same LAION-5B set.
Because there is little transparency in the space, it’s hard to know whether OpenAI, creator of Stable Diffusion rival DALL-E, or other key players have trained their own models on the same data.
OpenAI and Midjourney did not immediately respond to requests for comment.
“Removing material from the models themselves is the most difficult task,” the report notes.
Some AI-generated content, especially of kids who don’t exist, can also fall into murky legal territory.
Worried that the technology has outpaced federal laws protecting children against sexual abuse and the mining of their data, attorneys general across the U.S. recently urged lawmakers to take action to address the threat of AI CSAM.
The Canadian Centre for Child Protection, which helped validate Stanford’s findings, is most concerned about the general lack of care in curating these enormous datasets—which are only exacerbating longstanding CSAM issues that plague every major tech company.
“The notion of actually curating a billion images responsibly is a really expensive thing to do, so you take shortcuts where you try and automate as much as possible,” said Lloyd Richardson, the organization’s director of IT.
“There was known child sexual abuse material that was certainly in databases that they could have filtered out, but didn’t…
[and] if we’re finding known CSAM in there, there’s definitely unknown in there as well.”
That, he added, raises a major question for the likes of Stability AI: “If I’ve used illegal material to train this model, is the model itself illegal?
And that’s a really uncomfortable question for a lot of these companies that are, quite frankly, not really doing anything to properly curate their sets of data.”
Stability AI and Midjourney are separately among several tech companies being sued by a group of artists who’ve alleged the upstarts have wrongly used their creative work to train AI.
Best Buy has a huge sale encompassing some of the best TV deals around at the moment.
Whether you’re looking for a great priced 4K TV or you want to buy a more high-end OLED or QLED TV, there’s something here for you.
There are nearly 100 TVs in the sale so we don’t have time to talk about every single one.
Instead, we’ve focused on a few highlights while also giving you the button you need to check out the sale for yourself.
We’re confident you’ll find the right TV for you below with many available for delivery in time for Christmas.
What to shop for in the Best Buy TV sale
One of the best value propositions comes from one of the best TV brands.
Today, you can buy the for $550 saving a huge $750 off the regular price.
OLED technology makes everything look better with the TV offering self-lit pixels so you can enjoy depth, deep blacks, and vibrant colors all on the same scene.
It also has 100% color fidelity and 100% color volume, along with Dynamic Tone Mapping which helps apply the optimal tone curve at all times.
There’s also AI Picture Pro 4K technology which automatically enhances contrast and resolution for outstanding lifelike images.
Dedicated Filmmaker and game optimizer modes all ensure you get phenomenal picture quality at all times.
Alternatively, you can buy a super cheap QLED TV with the down to $250 from $400.
QLED is better than standard 4K with this TV offering a high brightness direct LED backlight along with Motion Rate 240 with MEMC Frame Insertion technology, Dolby Vision, HDR10+, HDR10, and HLG image improvements.
A dedicated game mode is great for players too.
For one of the best TVs, consider the for $3,600, reduced from $4,300.
It’s great for your home cinema setup with Brightness Booster Max producing up to 70% brighter images so you can enjoy high-contrast picture quality every time.
Designed for mounting on your wall, its Art Gallery mode is great for helping it blend in with your surroundings, while its Filmmaker mode, Dolby Vision and Dolby Atmos support, and Game Optimizer all mean it looks exceptional.
The Best Buy TV sale is truly something special at the moment so we’ve only touched upon a few examples of what TVs are on sale.
There really is something for everyone here with cheaper 4K TVs available, large displays, and some truly gorgeous looking OLED TVs from Sony, Samsung, and other brands.
To see just what’s out there, tap the button below to check out the full sale now.
When customers evaluate software, they typically have a few baseline expectations before buying:
• Knowing what the product does.
• Knowing basically how it works.
• Knowing what it costs.
• Being able to try it out before purchasing.
Except, that is, when it comes to cybersecurity.
Here, you might think vendors are protecting nuclear codes, given how little information they offer.
Product documentation?
Hidden.
Pricing details?
Top secret.
Want to try the product?
You’ll need to sign an NDA and complete this lengthy qualification process (with sales pitches every step of the way).
How can we explain this culture of secrecy that pervades the cybersecurity industry?
And why do we expect customers to keep putting up with it?
It’s not good for anyone when vendors selling much-needed security solutions keep throwing up barriers to understanding them.
But too many vendors expect customers to just trust them—even as they show little willingness to take steps needed to build that trust.
As industry expert Ross Haleliuk put it, “In cybersecurity, it is typically hard to make sense of what the vendor is offering to address the present-day threats, and it is not at all possible to predict future ones.
A purchasing decision in security is a leap of faith…For us to evolve as an industry, we need to focus on transparency and integrity.”
No one expects vendors to divulge sensitive IP.
But if we want to get our industry to a better place—where customers trust that we’re more concerned with blocking cyber threats than competitors—we need to get far more transparent.
Making Product Information Easy To Access
It’s now common for cybersecurity vendors to demand that prospects sign NDAs to get even basic information.
Forcing prospects to jump through hoops is obviously bad for customers, but it’s also bad for the industry.
When you refuse to make product information freely available, you undermine trust as customers (justifiably) wonder what you’re hiding.
Meanwhile, the more barriers customers face, the longer it takes them to add protection.
To build trust, more cybersecurity vendors should be willing to provide hands-on demonstrations—including with actual data—without an NDA (unless required by the customer).
Even better, vendors should consider offering free trials of their solutions to test them in lifelike conditions before buying.
Providing Transparent Pricing
You can’t build trust if customers think you’re trying to rip them off.
But what are customers supposed to believe when vendors refuse to share pricing until after an extensive sales process (or worse, when pricing fluctuates wildly depending on how badly the vendor wants the business)?
One of the best ways to build customer confidence is to provide upfront pricing with standard discounts that customers can calculate themselves.
Especially as more security vendors veer toward secrecy in their pricing, transparency will stand out.
Let’s say two vendors are bidding for a customer’s business.
Vendor A posts pricing on their website.
Vendor B insists on a full, formal sales process before finally providing a quote—double the price of Vendor A.
But when Vendor B realizes they’re in a competitive situation, they immediately cut their quote by 75%.
It’s a great price, but what lessons did the customer learn from this experience?
They now know Vendor B is happy to overcharge them when they can, while Vendor A comes off as transparent and consistent, suggesting a better candidate for a long-term partnership.
Offering A Free Tier
Every company codifies a mission statement, but how many truly believe it?
This question is particularly important in cybersecurity, where the overall mission of safety has such far-reaching consequences.
For example, there will always be independent developers who are just as vulnerable to cyber threats as large companies but can’t afford enterprise products.
What should cybersecurity vendors do for these developers?
If you believe in your mission, you should find a way to get some version of your product into their hands.
There will always be large numbers of users who could be contributing significantly to making our shared digital spaces safer, but for whom enterprise products and pricing just aren’t viable.
Offering a free tier of a product for these users can be the most effective way to truly advance a vendor’s stated mission, expanding protection for everyone.
Based on my own experience, a free tier is the best demo you can provide.
It bolsters your brand, increases usage and allows you to collect more customer feedback to continually improve your products.
Empowering Customers Through Education
When you build a company with people who truly believe in the mission, you’re likely to end up with a workforce of advocates, not just employees.
Often, they’re just as passionate about educating customers as building products.
In my experience, that passion can also be an excellent marker of a cybersecurity vendor’s overall philosophy and approach.
Customers attend conferences and industry events.
They can tell which vendors are there solely to pitch their products.
They can also see when presenters truly believe in what they’re doing and are seeking to educate the market about problems they view as critically important—whether they’re speaking to prospective customers or not.
Sharing one’s deep understanding of industry problems and one’s passion for solving them can be better than any advertising.
Time To Get Transparent
On one level, you can understand why cybersecurity vendors default to secrecy.
If you divulge more information, competitors could gain more insight into your products.
But when you’re confident in your strategy, your vision and your people as well as your technology, that’s not much of a threat.
Most vendors claim to be seeking long-term partnerships, not just quick sales.
But if you want customers to believe it, transparency can only further that goal.
It shows you’re confident that the more customers know about you, the more they’ll know they’re making the right choice.
Pennsylvania congressional candidate Shamaine Daniels has a new campaign staffer named Ashley who has made thousands of calls to voters.
There’s one catch: Ashley isn’t a real person.
Ashley is an artificial intelligence character, and the companies that developed her say Daniels’ campaign is the first in the world to use AI-powered interactive campaign calls customized to each recipient.
The robot’s creators say they will soon offer the technology to more political candidates—a move that could shake up campaigning by streamlining voter outreach but that also raises concerns about potential ethical issues and misinformation.
Daniels, a Democrat and three-term Harrisburg City Council member, said the AI tool makes it easier for candidates who don’t get large donations to do voter outreach.
“It makes reaching voters much more affordable,” she said.
“It also makes you able to communicate with voters much earlier on in the process, as you’re developing your policies, as you’re thinking about issues.”
But using AI to contact voters is “a really, really double-edged sword,” said AI expert Wasim Khaled, CEO and co-founder of Blackbird.AI, which helps companies protect themselves from artificial intelligence.
Daniels is one of several Democratic primary candidates hoping to unseat U.S. Rep. Scott Perry, who represents Central Pennsylvania’s 10th District, in 2024.
Perry, a leader of the conservative Freedom Caucus and a longtime ally of former President Donald Trump, is seeking a seventh term.
Daniels won the Democratic nomination but lost the general election to Perry in 2022.
After weeks of testing, Daniels’ campaign began making AI calls to likely Democratic primary voters last weekend, her spokesperson Joe Bachman said.
Ashley has already made thousands of calls, according to Ilya Mouzykantskii, the cofounder of Civox, which made the tool in partnership with Conversation Labs.
The robot can answer questions about Daniels’ platform.
It speaks multiple languages and discloses that it’s an AI tool and that the call is being recorded.
The calls can help recruit volunteers and get donations and feedback that can direct campaign messaging.
If Ashley doesn’t have the answer to a question, it offers to have a real person follow up with the caller, according to Daniels.
Daniels’ campaign staffers can access transcripts of the conversations to identify common themes, Bachman said, and can read the conversations and reach out to the constituent afterward.
Mouzykantskii said he expects to take on more Democratic and progressive candidates as clients soon.
“At this point, we have far more demand than we have supply,” Mouzykantskii said.
Ashley was developed specifically for Daniels’ campaign, and other campaigns will have their own characters.
While Daniels and the companies that created Ashley emphasized the affordability of the technology, neither would disclose how much the tool costs.
“We are significantly less expensive than human-paid phone bankers and we are more expensive than making dumb robocalls,” Mouzykantskii said.
Daniels said she and Civox both want to use AI “as more of a democratizing tool rather than a disinformation tool or a tool for other nefarious reasons.”
Mistakes are inevitable, but Ashley keeps a record of why each decision was made throughout its conversations to help developers understand mishaps, Mouzykantskii said.
Tools like Ashley learn and improve as they’re used.
And as AI technology has rapidly developed, regulations lag behind, causing concerns surrounding misinformation and privacy.
Daniels acknowledged the potential dangers of artificial intelligence.
“We have to make sure that we are promoting responsible use because, whether I use it or someone else uses it, the technology is here, it’s going to be used broadly, and we can either wait for my Congressman Scott Perry to develop ethical frameworks for dealing with AI or we can go ahead and do it ourselves,” she said, “and hopefully make sure that voters are starting to familiarize themselves with the technology so that way they are not taken advantage of.”
Khaled, of Blackbird.AI, said the technology has value if used by good actors, but also great potential for harm.
Even when developers train the model on what to talk about or avoid, it’s impossible to think of all the cases that could come up and be immune to hacker groups who want to make a campaign look bad, Khaled said.
Khaled said that it’s important for people to keep in mind that AI gives the “most probable answer to the sequence of words that you gave it,” that is “likely to be approximately correct.”
It doesn’t reason or think; it’s similar to Google’s autocomplete guessing the rest of your search as you start typing, he said.
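Khaled’s autocomplete analogy can be made concrete with a toy model that simply returns the most frequent continuation from a lookup table — no reasoning involved. The phrases and counts below are invented purely for illustration:

```python
# Toy "most probable next word" model, echoing the autocomplete analogy.
# The frequency table is invented for illustration only.
next_word_counts = {
    "vote for": {"the candidate": 5, "change": 2, "me": 1},
    "thank you": {"for": 7, "all": 2},
}

def most_probable_next(context):
    """Return the statistically likeliest continuation: no reasoning,
    just a lookup of what most often followed `context` in the data."""
    counts = next_word_counts[context]
    return max(counts, key=counts.get)

print(most_probable_next("vote for"))   # the candidate
print(most_probable_next("thank you"))  # for
```

The point of the sketch is that the output is “likely to be approximately correct” only because it mirrors the training data, which is exactly the limitation Khaled describes.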
“There is just a huge potential for getting this kind of thing wrong because everything is moving so quickly,” Khaled said.
“And there is an arms race right now for campaigns spending money to get their candidates the most exposure.”
©2023 The Philadelphia Inquirer, LLC. Distributed by Tribune Content Agency, LLC.
While there should be little doubt Artificial Intelligence (AI) is here to stay and will continue to transform how we work and play, perhaps a quick pause is necessary to surface a very important question that threatens to burst the AI bubble: can the industry sustain AI’s growing appetite for energy? To get to the bottom of this thorny question, I caught up with Lyline Lim, Head of Impact and Sustainability at PhotoRoom, a leading AI photo editor.
According to Lim, AI models consume so much energy because of the vast amount of data that the model is trained on, the complexity of the model, and the volume of requests made to the AI by users.
During training, the AI model “learns” how to behave based on a large set of examples and data.
Training an AI model can take anywhere from a few minutes to several months depending on the amount of data and complexity of the model.
During this time, GPUs – a type of electronic chip used to process large amounts of data – are running 24 hours per day, consuming a large amount of energy.
“The more complex the model and bigger the dataset, the more energy the AI will use during training,” said Lim.
Another driver of energy consumption is inference, the process of answering users’ queries.
The AI first “understands” the query, then “thinks” of an answer before sharing the conclusion with the user.
Each AI inference requires GPU processing power, which uses energy.
“The more popular the AI model, the more inferences will be run, and the more energy will be consumed,” said Lim.
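Lim’s two cost drivers — training time and inference volume — lend themselves to a rough back-of-envelope estimate. The figures below (GPU count, power draw, request volume, datacenter overhead) are illustrative assumptions of my own, not numbers from the article:

```python
# Back-of-envelope energy estimate for AI training and inference.
# All inputs are illustrative assumptions, not measured values.

def training_energy_kwh(num_gpus, gpu_power_kw, hours, pue=1.2):
    """Energy for a training run: GPUs running 24/7 for `hours`,
    scaled by the datacenter's power usage effectiveness (PUE)."""
    return num_gpus * gpu_power_kw * hours * pue

def inference_energy_kwh(requests, kwh_per_request, pue=1.2):
    """Energy for serving: scales linearly with request volume."""
    return requests * kwh_per_request * pue

# Hypothetical run: 512 GPUs at 0.4 kW each, training for 30 days.
train = training_energy_kwh(num_gpus=512, gpu_power_kw=0.4, hours=30 * 24)
# Hypothetical serving load: 10 million requests at 0.001 kWh each.
serve = inference_energy_kwh(requests=10_000_000, kwh_per_request=0.001)

print(f"training:  {train:,.0f} kWh")
print(f"inference: {serve:,.0f} kWh")
```

The sketch shows why both levers Lim names matter: training cost grows with model and dataset size (more GPUs, more hours), while inference cost grows with popularity (more requests).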
For most companies, the biggest incentive to make AI functionality more sustainable is cost and user experience.
The AI industry is relatively new, and at the stage where companies are looking for quality rather than cost or speed of execution.
“There is so much growth and investment that cost is still a secondary consideration for most businesses in the AI space,” said Lim.
As more AI models require more and more GPUs, the GPU providers may struggle to keep up.
This increase in demand might eventually lead to providers increasing the cost of their GPUs, which would force AI companies to use GPUs more efficiently.
Over the long term, AI companies will also be vulnerable to any increase in energy costs, also incentivizing businesses to become more cost and energy efficient.
In the AI space, environmental impact is highly related to cost-efficiency and user experience which are central topics for any business.
Lim believes there is no doubt AI innovators all along the chain will take the topic seriously as the AI market matures, regardless of their initial care for environment.
“Reducing the environmental impact of AI tools is tightly aligned with improving the user experience, and cost efficiency,” said Lim.
“If your model is slower, you will spend more on computing power, and the user will suffer from a slower experience.
So, if you can work on making your model faster, your user will have a much better experience, and you will reduce both your costs and your energy consumption.
Making AI models cost-effective is aligned with reducing environmental impact and providing the best experience for the user.”
According to Lim, generic AI models require much broader, larger datasets to be trained than specialist ones, and therefore consume far greater magnitudes of energy.
“By comparison, specialist AI models like PhotoRoom, which is tailor-made for product photography, consume far less energy,” said Lim.
“We did the math and found that PhotoRoom consumes 164 times less energy than a generalist image model like Stable Diffusion XL.”
PhotoRoom reached $50 million ARR in 3 years of existence.
The significant uplift came with the release of the second version of PhotoRoom’s generative AI feature (GenAI v2), which is 100x faster and produces higher image quality than the first version (GenAI v1).
One image was generated in 3 minutes with version 1 versus 2 seconds with version 2.
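The quoted per-image times imply roughly a 90x speedup, consistent with the order-of-magnitude claim above; the arithmetic is trivial to check:

```python
# Per-image generation time, from the figures quoted in the article.
v1_seconds = 3 * 60   # GenAI v1: one image in 3 minutes
v2_seconds = 2        # GenAI v2: one image in 2 seconds

speedup = v1_seconds / v2_seconds
print(f"per-image speedup: {speedup:.0f}x")  # prints "per-image speedup: 90x"
```

Since each second of GPU time draws power, a ~90x reduction in generation time translates almost directly into the energy savings Lim describes.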
Lim said the focus for PhotoRoom has always been about using technology in ways that are useful and accessible to users.
“We want to help more people make amazing photos–whether that’s a stay-at-home mom creating and selling her own jewelry part time who would never be able to afford a professional photographer, or an ecommerce team who are trying to automate manual tasks and simplify their workflows.”
PhotoRoom started by creating a phone and desktop app that could remove backgrounds better than anything else on the market, but quickly realized how big the opportunity for generative AI was going to be.
Last year PhotoRoom built the first version of what became Instant Backgrounds, its generative AI background creator.
“This was a very successful bet for us,” said Lim.
“We were the first to market with our AI background generator and Instant Backgrounds, and this helped us quickly build momentum, and just a year later we’re now the leader in AI photo-editing.”
Lim doesn’t believe AI, nor AI sustainability, is uncharted water for PhotoRoom.
“As a user-centric, mobile-first and AI-first company it has always been a priority for our team.
We are composed of experts who have been working on these problems for many years.
And we have developed our software with a great degree of consideration towards our energy consumption.”
PhotoRoom also reduces the environmental impact of client workflows, according to Lim.
“One ecommerce business told us they used to fly to the Alps to photograph their products, and now they are generating the same mountain scenes with PhotoRoom in seconds.
This has been a game changer for reducing the carbon footprint of their team, as the air travel industry is one of the biggest polluters on the planet.”
The journey of user experience (UX) design in information technology reflects a continual effort to bridge the gap between how humans engage with their environment and the costs and capabilities of computer hardware and software.
A Brief History Of Human-Computer Interaction
This work dates back to the 1950s, with the advent of the first computer systems.
These early interfaces were primarily command-line based, demanding a steep learning curve from users.
As technology evolved through the 1960s and 70s, so did the efforts to make these digital tools more accessible.
The introduction of graphical user interfaces (GUIs) in the 1980s marked a significant stride toward a more user-friendly digital ecosystem.
The quest for better and more intuitive human-computer interaction did not stop with the GUIs of the 1980s.
The subsequent decades have witnessed the emergence of numerous design philosophies and frameworks aiming to further refine the interaction between humans and computers.
The rise of the internet and web-based applications in the 1990s propelled a new wave of UX design principles, focusing on ease of navigation, readability and visual appeal.
The advent of mobile technology and smart devices in the 2000s further diversified the UX design landscape.
The small screen real estate and touch-based interactions necessitated rethinking design principles.
A plethora of interfaces emerged, each tailored to the specific functionality and form factor of myriad devices and applications.
The UX design community continually iterated on interface designs to cater to evolving user preferences and technological advancements.
The Lingering “Last Mile” Problem
Despite these endeavors, a persistent challenge remained: the “last mile” problem of human-computer interaction.
Each application and device ushered in its unique interface, demanding users to acclimate to yet another interface.
The overwhelming array of features offered by modern applications is almost always underutilized, as users find solace in mastering a limited set of functionalities that cater to their immediate needs.
In my experience, I’ve seen that, on average, users only engage with a small fraction—around 10%—of the available features in an application.
This scenario underscores our current reality.
The evolution of UX design, albeit remarkable, has yet to solve the human-computer interface last-mile problem.
Users are still required to navigate a labyrinth of heterogeneous interfaces to interact with the digital and physical worlds, often leading to a limited utilization of the available technological features.
How GenAI And PAIAs Can Help
I believe generative artificial intelligence (GenAI) will help address this 70-year-old challenge.
GenAI presents an avenue for redefining the user experience by orchestrating a seamless, bidirectional conduit between individuals and their encompassing physical and digital landscapes.
At the heart of this transformation is the emergence of personal AI assistants (PAIAs) that will help usher in an era characterized by personalized, continuous and bidirectional exchanges between humans and the world they interact with.
• PAIAs can facilitate organic interactions with digital platforms using images, text and speech fully personalized to each user, minimizing the reliance on understanding complex graphical interfaces.
These technologies, integrated within the PAIAs, can overlay digital information onto the physical world, furnishing a unified visual interface for interactions across digital and physical domains.
By enabling tactile interactions with digital information, haptic technology can further bridge the perceptual divide between digital and physical interactions.
As individuals travel through their daily personal and professional engagements, PAIAs can interact dynamically with many application-specific AIs.
This interplay can span an array of domains, ranging from professional tools to personal smart systems, operating in a network of intelligent exchanges.
PAIAs can ingest and synthesize an avalanche of information from both physical and digital domains.
They prioritize this information based on individual preferences, schedules and situational needs while representing human interests and requirements back to these domains autonomously.
Beyond passive information delivery, PAIAs can embody a proactive stance, aiding individuals in navigating the information landscape.
They can not only organize and deliver curated information but also act as an agent that autonomously represents the individual’s interests across various platforms, making recommendations and even taking preapproved actions on behalf of their human.
PAIAs will evolve, learning from each individual’s interaction patterns, preferences and feedback.
This continuous learning will enable the PAIA to refine its recommendations, ensuring they align closely with the individual’s evolving needs and preferences, thereby fostering more personalized and anticipatory assistance.
Looking To The Future
The impact of GenAI and PAIAs is far more than simplifying interactions with future yet-to-be-released applications; it also encompasses unlocking the full potential of existing software applications.
These technologies will do so by removing the barriers posed by heterogeneous, complex user interfaces, ensuring that these applications’ information and rich features and capabilities are not just accessible but intuitively aligned with the user’s needs and preferences.
The emergence of GenAI heralds a transformative era in maximizing the utility and efficiency of today’s applications.
By leveraging GenAI, there is a tremendous potential to unlock the latent value within these existing applications, which are currently hindered by underutilization due to complex interfaces and the unawareness of full feature sets.
GenAI’s capacity to personalize and simplify user interactions, provide intuitive guidance and automate complex tasks could allow users to leverage 100% of an application’s capabilities.
This not only enhances the productivity and effectiveness of users but also ensures that the sophisticated functionalities, often overlooked or underused, are brought to the forefront of user experience.
The result is a shift in how software applications are used, from tools with a fraction of their features regularly employed to dynamic platforms where every element is accessible and utilized to its fullest potential.
This comprehensive utilization can significantly improve operational efficiency, decision-making accuracy and overall user satisfaction, ultimately translating into near-term and tangible benefits for businesses and individuals alike.
How much can a smartwatch really help you to improve your health and fitness?
Most of us are familiar with the perennial problem of knowing you need to be healthier, but not quite understanding how to get there (and staying motivated to do so).
Huawei’s latest research looks at whether a smartwatch has the power to make a difference.
The study, which looked at the general attitudes, awareness and fitness habits of 8,000 participants across Europe, reveals an interesting paradox in our health aspirations versus reality.
While 82% of respondents express an interest in physical health, only half are content with their current fitness levels, and a staggering 60% admitted to not having regular health checks.
When asked which health measures they most often monitor, the five most common were physical activity (57%), hydration levels (54%), heart rate (51%), sleep quality and duration (50%), and calories burned (45%).
In contrast, critical health indicators like respiratory rate (37%), blood sugar levels (34%), body fat percentage (34%), and ECG readings (26%) are often overlooked.
Yet, there’s one game-changing factor that the study reveals can absolutely play a major role in your health journey: owning a smartwatch.
Smartwatch owners are not just casually glancing at their devices; they are actively tracking vital health metrics.
They monitor heart rate, blood oxygen saturation, and calories nearly three times more diligently than those without a smartwatch.
Additionally, 47% of users say their smartwatch nudges them to exercise more frequently, 41% report an increase in their workout duration, and 28% have altered their diet based on their watch’s recommendations.
Consequently, a compelling 88% of survey respondents believe their smartwatch has elevated their physical health, with 86% affirming an improvement in their overall quality of life.
The conclusion is clear: smartwatches are potent tools for health improvement.
However, choosing the right one, equipped to track essential fitness metrics, is crucial.
Huawei’s smartwatch range is designed to meet these needs, offering a variety of features to support your health and wellness journey.
Huawei has long been at the forefront of health and fitness technology with their wide range of smartwatches.
Alongside their advanced features, and eye-catching, luxe designs, these are also full of impressive health tracking features.
In particular, the flagship HUAWEI WATCH 4 is medically certified to carry out ECG examinations, with a simple one-button ‘Health Glance’ feature that checks up to seven core health metrics in just 60 seconds.
Additionally, the HUAWEI Watch D is another medically-certified wearable that comes with ECG analysis and an in-built inflatable band inside the strap to accurately measure your blood pressure.
If you’re looking for style as well as substance, the HUAWEI WATCH GT 4 comes in a range of eye-catching designs, sizes and types of watch faces to suit your tastes.
More importantly, it comes with a number of health and fitness tracking features including a scientific work out coach, and a 24/7 health management system.
And for the more super active and adventurous type, the beautiful HUAWEI WATCH Ultimate comes with specialist expedition and diving features for the keen swimmer.
Beautifully crafted from super-premium materials, the Watch Ultimate is designed to be durable and hard-wearing.
So if you want to make health a top resolution for the new year, let Huawei smartwatches help you keep track of your fitness and well-being.
For more information, check out the full range of Huawei wearables .
Despite the ever-changing tech landscape, DevOps continues to be one of the main drivers of innovation, efficiency and competitive advantage for many organizations.
While processes and tooling differ between industries and organizations, there are key trends, challenges and best practices that shape the ecosystem.
Google’s State of DevOps Report is the gold standard of DevOps research.
Now in its ninth year, it’s the largest and longest-running research tool of its kind.
While Google’s report covers a broad span of emerging technologies, cultural concerns and process improvements, it identified three key trends that will continue to guide the DevOps industry in 2024.
Put Users First, But Balance With Team Needs
There’s a fundamental truth in software development: serving the user should be the driving force for innovation and improvement.
Building and releasing clever solutions only has value if it’s closely tied to meeting the needs and high expectations of the end users.
But is the best approach to development really to put users first all the time?
Google’s 2023 report considered the performance, delivery and job satisfaction of four different types of teams: User-centric, Feature-driven, Developing and Balanced.
User-centric teams have a clear understanding of their users’ needs, leading to a 40% higher organizational performance than other team types.
While this highlights that delivering for user needs allows organizations to excel in this competitive landscape, the report suggests that user-centric teams experience a higher rate of burnout.
By contrast, the balanced team type, which opts for a more sustainable approach, was found to have the lowest rate of burnout even though its organizational performance was slightly lower than its user-centric counterpart.
So, teams hoping to maximize their cultural, technical and delivery performance should focus on user needs while creating a balanced culture and processes that support their DevOps team.
AI: There’s A Long Road Ahead
It’s been suggested that GPT technologies could help DevOps teams increase productivity by offering solutions to technical issues and extracting important information from documents.
Google’s 2023 report tells a slightly different story.
The overriding message is that right now, adopting AI won’t enhance DevOps productivity overall.
The report found that AI had no effect on team performance and was actually associated with a substantial decrease in operational performance, with software delivery also seeing a minor decrease.
A few positives around the implementation of AI were reported, with burnout seeing a minor decrease, and job satisfaction and productivity both having minor increases.
The report also found enthusiasm for the potential of AI development tools but concluded that it would still be a while before AI-powered tools are widespread throughout the industry.
Google’s team speculated that this mixed picture might center around the AI-tool adoption of enterprises and their tentative testing and use of trials before committing to AI more broadly throughout the business.
Even respondents who reported that AI was “extremely useful” wouldn’t use it to gather user feedback, make decisions or collaborate with teammates—so it remains unlikely that empathy-driven DevOps processes like gathering user feedback and improving culture will benefit much from AI.
However, it was reported that AI will most likely be useful for data-driven behaviors like analyzing data and security.
Teams should also exercise caution with AI.
The importance of security and compliance for DevOps has become increasingly obvious over the past few years, with DevSecOps emerging as an approach that prioritizes security alongside existing development approaches.
Similarly, continuous monitoring and automated security checks have transitioned from being optional add-ons to becoming standard practice.
This transition reduces vulnerabilities and ensures the highest levels of data protection, bolstering trust and further encouraging the adoption of DevOps.
While there could be productivity benefits of AI for data analytics and security, this carries the added concern of introducing a security risk into your processes.
Organizations will need to balance DevSecOps and experimenting with promises of AI efficiency.
Culture Remains Key
Google’s State of DevOps Report defines culture according to a model that proposes three types of culture:
• Generative: Innovative, with open communication and collaboration channels;
• Bureaucratic: Uses structured and formalized procedures with an emphasis on rules and predictability;
• Pathological: Based on a lack of transparency and open communication.
Google identified generative cultures as producing the highest-performing DevOps teams, as collaboration is encouraged while allowing room for innovation.
The report found that an even spread of workload improves the well-being of the team and wider company, but surprisingly it also showed that oversharing tasks slowed the software delivery pipeline and speed of delivery.
While this could be due to formal processes in bureaucratic cultures blocking specific roles from quickly completing their work, generative cultures also need to make a conscious effort to prioritize delivery over deliberation.
Cultural issues may be exacerbated for DevOps teams working on low-code platforms like Salesforce.
While low-code platforms promote teams with a mixture of technical and non-technical backgrounds and skill sets, over time this can create an imbalance of responsibility and workload.
Technical team members may become overwhelmed by sole responsibility for more technical tasks like QA, while non-technical team members may feel blocked by a need to involve traditional developers in final releases or decision-making.
This can greatly impact performance and lead to poor job satisfaction.
Teams must create a culture that balances collaborative processes with a focus on delivery.
This should be supported by open channels of communication that empower teams to pull in external expertise when needed and give leaders visibility over work if any guidance is needed.
Balance: The Guiding Trend
As with all industry analysis, DevOps research can only point to trends across the whole industry.
As your organization will have unique processes and priorities, you mustn’t blindly follow trends that distract you from your real goal—productively solving team and user problems.
As you plan your DevOps approach for 2024, remember to balance the innovation that’s been highlighted by these key trends with your own team’s preferences and the guiding principle of solving user needs.
Apple has unveiled a new research project called HUGS, a generative artificial intelligence tool that can turn a short video clip of a person into a digital avatar in minutes.
This is the latest in several Apple research and product announcements designed to blend the physical and the virtual worlds.
While HUGS has no immediate application, it will likely form part of the Apple Vision mixed-reality ecosystem in the future.
With its Vision Pro headset announcement earlier this year, Apple revealed techniques for creating virtual versions of the wearer to represent them in Facetime calls and meetings.
This is likely an extension, providing full-body avatars without 3D scanning equipment.
HUGS, or Human Gaussian Splats, is a machine learning method that can scan real-world footage of a person, create an avatar from the footage, and place it into a virtual environment.
Apple’s big breakthrough is in creating the character from as little as 50 frames of video and doing so in about 30 minutes, significantly faster than other methods.
The company claims it can create a “state-of-the-art” quality animated rendering of the human and the scene shown in the video in high-definition and available at 60 frames per second.
Once the human avatar has been created it could be used in other scenes or environments, animated in any way the user wants and even used to create new dance videos.
Imagine you have your Vision Pro headset on (or a cheaper version rumored to be coming next year) and you’re playing a third-person game like GTA 6 or Assassin’s Creed.
Now imagine that instead of the built-in character, or an avatar that has your face but not your body, you have a clone of your full self but in the digital world.
That is already possible but requires a long processing time or expensive cameras.
If HUGS becomes more than a research project, it could be a native feature of visionOS where you upload a video of yourself and it turns you into a character that can be used in a game.
HUGS is unlikely to be available anytime soon.
This is another Apple research paper and while the company might be incorporating aspects of it behind the scenes, it is early stage work.
What we might see at WWDC in 2024 is a form of this technology available for Apple Developers building apps and interfaces for the Vision Pro.
The reality is that for now, Apple’s Digital Personas are likely to be headshots that can sit in the frame for a FaceTime call.
But this is a good indication of where the company is going.
Irvine-based Masimo Corp. Chief Executive Officer Joe Kiani, head of the medical device maker whose patent fight has forced Apple to pull its latest smartwatches from sale, said he’d be open to settling with the company.
The executive, speaking Tuesday on Bloomberg TV, said the “short answer is yes,” when asked if he’d settle, but he declined to say how much money he’d seek from Apple.
Kiani said he would “work with them to improve their product.”
“They haven’t called,” he said.
“It takes two to tango.”
The International Trade Commission ruled earlier this year that the Apple Watch violates two Masimo patents related to blood-oxygen sensing.
The ITC imposed an import ban on the Ultra 2 and Series 9 models that goes into effect Dec. 25.
The restriction applies only to Apple’s own sales channels.
Best Buy Co., Target Corp. and other resellers can continue to offer the products.
But it’s put Apple in the unusual situation of having to pull a big moneymaker off its shelves during the all-important holiday season.
The Apple Watch generated about $17 billion in revenue in the last fiscal year.
“These guys have been caught with their hands in the cookie jar,” Kiani said.
The medical industry veteran said he last spoke to Apple in 2013, when the iPhone maker discussed acquiring his company or hiring him to help with its in-house technology efforts.
Any settlement talks would need to include an “honest dialogue” and an apology, he said.
An Apple spokesperson said that the ruling from the ITC is erroneous and should be reversed.
The company plans to appeal the decision.
Still, Apple is already preparing for the ban.
It announced plans to pull the devices from its e-commerce site Thursday and will do the same at its physical retail stores on Christmas Eve.
Kiani called that move a “stunt” to pressure the Biden administration to veto the order.
The US president has the ability to step in and cancel an ITC injunction.
“This is not an accidental infringement — this is a deliberate taking of our intellectual property,” Kiani said.
“I am glad the world can now see we are the true inventors and creators of these technologies.”
He accused Apple of hiring more than 20 of his engineers — and doubling their salaries in some cases — to get them to work on similar medical technology for its watch.
“Apple could be an example of how to do things right and do things well, and they didn’t have to steal our people,” Kiani said.
“We could have worked with them.”
In a statement, Apple said its teams “work tirelessly to create products and services that empower users with industry-leading health, wellness and safety features.”
“Apple strongly disagrees with the order and is pursuing a range of legal and technical options to ensure that Apple Watch is available to customers,” the Cupertino-based company said.
Apple also has said it believes Masimo began the legal fight to clear the market for its own smartwatch, which the iPhone maker said is a knockoff of its device.
A lawsuit Masimo filed against Apple over the issue ended this year with a federal court jury unable to reach a decision.
Six of the seven jurors sided with Apple in the case.
The tech giant is working on a software update for the Apple Watch that it believes will resolve the ITC dispute, Bloomberg News previously reported.
Kiani pushed back on that idea in the interview Tuesday.
“I don’t think that could work — it shouldn’t — because our patents are not about the software,” he said.
“They are about the hardware with the software.”
Asked if the import ban could be avoided, Kiani said that if Apple manufactured the watch and its components in the US, no such import ban would be possible.
In contrast, he said, Masimo builds its technology in the US.
This year, we’ve seen huge shifts in the e-commerce world.
The opportunities and pitfalls presented by the mainstream adoption of AI programs have been tempered by pressure from inflation that has customers spending cautiously.
However, a new year is on the horizon and it’s set to bring its own challenges and opportunities.
Let’s take a look at what to expect from 2024.
AI Goes Mainstream
You might be thinking: hasn’t that already happened?
However, trends suggest that, when it comes to artificial intelligence (AI), we’ve only seen the tip of the iceberg.
AI-powered e-commerce solutions are expected to form a market worth $16.8 billion by 2030, with a CAGR of 15.7% over the next eight years, while Forrester predicts that ‘AI will permanently upend the agency landscape in 2024.’
Several technologies are powering this growth. AI-powered personalization will change marketing and UX in long-lasting ways, generating personalized ads and content, setting dynamic pricing and deals, and deploying AI-powered chatbots.
Research by McKinsey has demonstrated that this kind of omnichannel personalization can boost retention and revenue by up to 15%, generate cost savings of up to 30%, and increase customer acquisition by up to 5%.
AI-driven search engines are increasingly shaping what does and doesn’t feature in search results, and this is only going to happen more often as they gain competence.
This means that SEO might become less dominant compared to other channels.
This will likely impact e-commerce sales overall as customers can more efficiently locate the products they’re looking for.
AI prediction technologies utilize first-party data to streamline UX, steering customers to exactly where they need to be and removing distractions along the way.
Having gained prominence over the past few years, I believe predictive analytics based on first-party data is set to grow in 2024 due to the need to replace previous methods based on invasive tracking.
This can allow retailers to forecast the behavior of their customer base to better manage their online store and offer personalized promotions and recommendations.
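To make the idea concrete, here is a minimal sketch in Python of a first-party-data recommendation: a store mines its own purchase histories, with no third-party tracking, to suggest what a customer is likely to want next. The customer IDs, products and co-occurrence logic are all illustrative assumptions, not any specific vendor’s method.

```python
from collections import Counter

# Hypothetical first-party purchase histories (no third-party tracking involved).
histories = {
    "cust_1": ["running shoes", "socks", "water bottle"],
    "cust_2": ["running shoes", "socks"],
    "cust_3": ["running shoes", "water bottle"],
    "cust_4": ["yoga mat", "water bottle"],
}

def recommend(customer_id, histories):
    """Suggest the item most often bought by customers with overlapping purchases."""
    owned = set(histories[customer_id])
    counts = Counter()
    for other, items in histories.items():
        if other == customer_id or not owned & set(items):
            continue  # only learn from customers who share at least one purchase
        counts.update(item for item in items if item not in owned)
    return counts.most_common(1)[0][0] if counts else None

print(recommend("cust_2", histories))  # prints: water bottle
```

Production systems replace these raw counts with trained predictive models, but the principle is the same: the signal comes entirely from data the retailer already owns.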
E-Commerce Gets Serious About Compliance
With authorities showing a new willingness to enforce data privacy requirements in the EU (GDPR) and California (CCPA), and other U.S. states continuing to pass their own privacy laws, failure to comply with data protection and privacy requirements is no longer an option for any medium to large e-commerce business.
The past few years have seen an e-commerce boom.
As with any boom, regulation takes a while to catch up, but it finally has.
In 2024, we are likely to see a stronger focus on governance, risk and compliance (GRC) in e-commerce, prompted by high-level litigation against several large companies in late 2023.
Companies will need to pay close attention to the ways they collect and use customer PII, including aspects such as informed consent and effective opt-out from tracking.
Furthermore, certain types of marketing technology, including targeted advertisements that rely on third-party cookies and the sale of third-party information, will need to be reconsidered or abandoned due to this changing privacy landscape.
In short, the Wild West of e-commerce is being cleaned up, and e-commerce sellers must act now to get their shops in order before the sheriffs arrive.
Attribution Models Become More Sophisticated
With the dominance of digital channels in recent years, last-touch attribution has become the de facto standard for many performance marketing teams.
However, tracking is becoming more difficult (which also impacts first-touch and multi-touch attribution).
There is also a growing consensus that these models are too limited in how they measure marketing efforts, fragmenting the complex journeys and myriad factors that lead customers to buy.
One survey confirmed that only 60% of marketers believe they can prove marketing ROI, while only 23% are “very confident” that they are measuring the right KPIs.
In 2024 we are likely to see more brands adopt a broader range of attribution techniques such as media mix modeling and incrementality.
These methods require a larger investment in data and analytics, but when combined with last-touch attribution, they can provide a very accurate means of assessing the efficacy of various marketing streams.
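The difference between these models can be shown with a toy example. The sketch below, using a hypothetical customer journey and an invented revenue figure, contrasts last-touch attribution with a simple linear multi-touch split; real media mix modeling and incrementality testing are far more involved statistical exercises.

```python
# A hypothetical customer journey: the channels touched before purchase, in order.
journey = ["social ad", "email", "organic search", "retail media ad"]
revenue = 120.0  # value of the eventual purchase (illustrative)

def last_touch(journey, revenue):
    """All credit goes to the final touchpoint."""
    return {journey[-1]: revenue}

def linear_multi_touch(journey, revenue):
    """Credit is split evenly across every touchpoint."""
    share = revenue / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

print(last_touch(journey, revenue))          # {'retail media ad': 120.0}
print(linear_multi_touch(journey, revenue))  # 30.0 credited to each channel
```

Under last-touch, the first three channels look worthless; under the linear split, each earns a quarter of the revenue. Neither answer is “right,” which is exactly why brands are layering multiple techniques.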
Understanding the full impact of measurable elements, such as browser extensions, can also help retailers gain a better picture of their customer base and store performance.
Retail Media Networks On The Rise
Retail media network advertising such as Walmart Sponsored Search and Amazon Ads will play an increased role due to preparations for the end of the third-party cookie in late 2024.
This is expected to form a major growth area, with revenue from these networks predicted to reach $126 billion by the end of 2023 and surpass that of television by 2028.
This is partially because these advertisements are relatively futureproof, targeted to individual customers using first-party data based on purchase history, type of device, location and myriad other factors.
Unlike many other forms of advertising space, these adverts target users while they are shopping, when they are more likely to be in a ‘buying’ frame of mind.
Furthermore, with major marketplaces controlling 50% of U.S. e-commerce, retailers cannot afford to ignore these channels.
As these marketplaces mature, they are likely to offer more options for data-driven advertising and third-party offerings that can help retailers make smarter decisions within these ecosystems.
A Brave New World
The coming year is almost guaranteed to bring seismic shifts, from the rapid spread and evolution of AI technology to other changes that e-commerce professionals will have to adapt to.
These changes will make some and break others.
My advice is to consider how to effectively embrace AI in a way that makes sense for your organization, utilize first-party data and diversify attribution models.
is an invitation-only community for world-class CIOs, CTOs and technology executives.
Businesses are rushing to enhance their products and operations with artificial intelligence (AI).
Generative AI promises business owners a way to bridge talent gaps and reduce employee overhead.
Other AI capabilities offer benefits like advanced analytics, automation and a highly personalized user experience.
Yet, as exciting as the AI journey may seem, there can be lots of obstacles.
Whether you decide to build an AI-powered service for your customers or optimize your company’s internal processes with the help of AI technologies, here are four crucial challenges I’d like you to think about.
1. You’ll need quality data—a lot of it.
Every smart solution is built on data.
To deliver a competitive AI product that can make accurate predictions and provide reliable insights, you first need to gather lots of relevant, high-quality data.
This means clean and uncorrupted data with no errors or duplicates.
Another important requirement is that your data must be diverse and properly labeled to improve your solution’s accuracy and minimize the risk of introducing bias.
Also, avoid feeding your AI any personally identifiable data to avoid ethical and legal complications.
As highlighted by Funmipe “VF” Olofinlade, privacy by design is fundamental for building a quality AI product.
There are many ways to ensure the appropriate level of data privacy within your AI system.
One of the most effective approaches so far is anonymization—encrypting or erasing anything that can potentially connect data to an individual person.
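As a rough illustration, one common building block here is keyed hashing, which replaces an identifier with a stable token. Strictly speaking this is pseudonymization rather than full anonymization (the key must be protected, and quasi-identifiers need separate treatment), and the sketch below uses made-up values rather than any company’s actual process.

```python
import hashlib
import hmac

# Secret key, held separately from the dataset (illustrative placeholder value).
PEPPER = b"rotate-and-store-this-key-in-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash so records stay linkable
    for analytics without exposing the raw value."""
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "last_purchase": "running shoes"}
safe_record = {
    "customer_key": pseudonymize(record["email"]),  # stable, non-reversible token
    "last_purchase": record["last_purchase"],       # non-identifying fields kept as-is
}
assert "email" not in safe_record
# The same input always yields the same token, so datasets remain joinable.
assert safe_record["customer_key"] == pseudonymize("jane@example.com")
```

Because the token is deterministic per key, training pipelines can still join records across tables while the raw identifier never enters the model.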
At my company, we often help our clients build and improve their datasets, ensuring quality pre-processing of data.
As part of this pre-processing, we thoroughly anonymize personal data to ensure legal compliance and eliminate privacy and security concerns.
2. Start looking for AI talent or grow it yourself.
Businesses in various industries expect new AI products, especially those relying on generative AI models, to deliver significant value.
However, the rising popularity of AI-powered solutions itself creates a high demand for relevant specialists.
Whether you want to create a general-purpose AI service or a niche, industry-tailored solution, you’re going to need a team of AI creators with strong expertise in data science, machine learning and AI development.
Depending on what team you already have in place, you might need to hire new experts, upskill your existing specialists, or do both.
As growing and advancing an in-house team takes time and money, it may be wise to delegate the development, testing, and support of your AI model to an outsourcing company.
Just make sure to check that your outsourcer has relevant experience with your industry and type of AI system.
My company often works with complex projects focused on industries like healthcare and cybersecurity, and we dedicate a lot of resources to advancing the skills of our AI experts.
Your company can and should do the same.
3. Ensure the secure and responsible use of your AI system.
As Gen-Zers shape workplace technology habits, companies like Samsung are restricting the use of generative AI tools to prevent leaks of sensitive data.
But does efficiency really have to come at the cost of security?
When building new AI products, you need to think about their security and efficiency at the inside and outside levels.
The inside level is about what the AI model is.
To protect your AI system from possible security risks, using bias-free data and reliable algorithms isn’t enough.
You also need to heavily test your system and employ strong encryption and access management mechanisms.
The outside level is about what your AI model does.
Adversarial users may look for destructive ways to benefit from your model.
Monitoring and moderating the use of AI-powered products, especially at the early stages of their lifecycle, can help detect and prevent such attempts.
Curiously, one possible solution to this problem can also be driven by AI.
When building and testing new AI-powered solutions for your clients, it’s important to rely on both internal coding standards and general recommendations from established tech leaders.
However, we as a community also need to keep working on reducing the use of AI for illegal activities and encouraging its responsible use.
4. Plan for the integration and future growth of your AI solution.
To create a solution that can stay competitive in the long run, you need to account for things like high reliance on legacy systems and continually changing legal requirements for AI solutions.
For your AI product to be compatible with other solutions, you may need to adjust your technology stack, skill set and even workflow.
Therefore, knowing which services and solutions you want your AI system to integrate with can help you better plan your project and prevent you from draining resources.
Planning for regular updates and improvements is also vital for your solution’s performance and security.
As AI models tend to degrade over time, we advise our clients to periodically fine-tune their AI systems so they can effectively handle new data and tasks.
Rushing straight into adopting AI can be risky.
But when you know and account for possible risks and challenges, you get to leverage the full potential of this promising technology for your business.
The Energy Vault (NRGV) installation at Rudong, near Shanghai, is the first gravity energy storage system of its kind.
Energy Vault (NYSE: NRGV) will license six additional EVx gravity energy storage systems in China just months after starting work on the world’s first GESS facility near Shanghai.
Those of you who follow this column know that Energy Vault (NYSE: NRGV) is designing and building facilities that essentially recreate the physics of the most popular form of energy storage – pumped hydro – without pumps or hydro.[1]
Energy Vault installations use excess renewable energy to lift massive composite blocks; then, when the energy is once again needed on the grid, the blocks are dropped and the kinetic energy from the dropping blocks spins generators that supply electricity to the grid.
The company believes it will be able to achieve a respectable round-trip efficiency (RTE) of over 80% with its current design.
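The underlying physics is just gravitational potential energy, E = mgh. The back-of-the-envelope sketch below uses illustrative numbers (a 30-tonne block lifted 100 m), not Energy Vault’s actual block mass or tower height, to show the scale involved.

```python
# Illustrative numbers only -- not Energy Vault's actual specifications.
mass_kg = 30_000        # one composite block (~30 tonnes)
lift_height_m = 100.0   # height the block is raised with surplus renewable power
g = 9.81                # gravitational acceleration, m/s^2
round_trip_efficiency = 0.80  # the ">80%" figure cited for the EVx design

stored_joules = mass_kg * g * lift_height_m        # potential energy, E = m*g*h
stored_kwh = stored_joules / 3.6e6                 # 1 kWh = 3.6 MJ
delivered_kwh = stored_kwh * round_trip_efficiency # energy recovered on the way down

print(f"Stored: {stored_kwh:.1f} kWh, delivered after losses: {delivered_kwh:.1f} kWh")
```

At roughly 8 kWh per lift of a block like this, a 100 MWh facility implies on the order of ten thousand block-lifts’ worth of stored energy, which helps explain why EVx installations are built around very large numbers of very heavy blocks.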
While some armchair engineers have been critical of Energy Vault’s solution, I think there are good reasons to take this company’s technology seriously.
For an overview of the company and the problem Energy Vault is trying to solve, please see my earlier coverage. The armchair engineers now have egg on their faces after a succession of positive announcements about commercial traction for the company’s GESS facilities.
Your correspondent was excited to catch up with Energy Vault CEO, Robert Picconi, to discuss the news.
Many of you must have seen the announcement that Energy Vault was beginning the initial phase of commissioning of the world’s first GESS facility near Shanghai.
The facility is sited adjacent to a wind farm and has a 25 MW / 100 MWh capacity (in other words, the facility can provide 25 MW of electricity to the grid for 4 hours at a time).
This maiden facility is being built using Energy Vault’s EVx design, which uses advances in material sciences technology and locally available raw materials to lower the carbon footprint of the build-out.
This announcement is a big deal—it is the world’s first “dry” pumped hydro storage facility and once completed, will be one of the largest long-term storage facilities on earth.
With the initiation of this project, Chinese policy makers and grid operators are embarking on a bold step by nurturing a technology that allows intermittent renewable power generation to be stored and dispatched on command, displacing fossil fuel generation.
For all the grief China gets in the press for generating carbon dioxide emissions (mainly to build inexpensive products that consumers in the developed world snap up at Wal-Mart or on Amazon) the fact that it is moving quickly to implement the GESS technology is noteworthy, in my opinion.
China has mandated that all renewable energy facilities (i.e., solar and wind farms) must integrate 20% of the nameplate generation capacity’s worth of storage.
This rule contrasts with the situation in the U.S., which has witnessed the rapid construction of large wind and solar farms over the past 20 years but has not mandated that storage be integrated with these sites.
Following the news of Energy Vault’s first GESS facility, the company announced that six additional EVx facilities will be built in China.
The first EVx project announced is a massive 2 GWh facility in Inner Mongolia, with five more—ranging in capacity from 100 MWh to 660 MWh—planned in the provinces of Hebei, Shanxi, Gansu, Jilin, and Xinjiang.
A screenshot from a slide in Energy Vault’s 3Q23 earnings call shows the locations of each of the projects.
The latter five projects are being deployed by China Tianying, Inc. (000035:CH), an environmental engineering company, through Energy Vault’s license and royalty agreement with Atlas Renewable Energy, a U.S.-based energy project developer.
Energy Vault will collect a 5% ongoing royalty on Atlas’s Chinese revenues.
Combined, all the plants have a cumulative storage capacity of 3.26 GWh and represent over $1 billion in capital expenditures.
China Tianying’s Chairman, Mr. Yan Shengjun, confirmed that Energy Vault’s innovative GESS facilities are meeting with strong demand in China for several reasons.
Chinese decision-makers appreciate the long economic life of the assets—over 35 years—and the fact that each facility is built with significant local labor and raw material content.
Energy Vault believes that, even though its EVx systems’ maximum RTE is slightly lower than that of lithium-ion battery technology, the very long economic life of the assets reduces the “Levelized Cost of Storage” (LCoS)—in other words, the cost of each unit of storage spread over the facility’s full lifecycle.
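A simplified LCoS comparison makes the point. The figures below are invented for illustration (they are not Energy Vault or battery vendor pricing), and real LCoS models also discount future cash flows and account for degradation; but holding cost and throughput equal, a longer asset life directly lowers the levelized figure.

```python
# Illustrative inputs only -- not actual gravity-storage or battery pricing.
def lcos_usd_per_mwh(capex_usd, annual_opex_usd, life_years, mwh_per_year):
    """Levelized Cost of Storage: lifetime cost spread over lifetime throughput.
    (Real LCoS models discount future cash flows; omitted here for clarity.)"""
    lifetime_cost = capex_usd + annual_opex_usd * life_years
    lifetime_mwh = mwh_per_year * life_years
    return lifetime_cost / lifetime_mwh

# Same upfront cost and annual throughput; only the asset life differs.
long_lived = lcos_usd_per_mwh(300e6, 2e6, life_years=35, mwh_per_year=100_000)
short_lived = lcos_usd_per_mwh(300e6, 2e6, life_years=15, mwh_per_year=100_000)
print(f"35-year asset: ${long_lived:.0f}/MWh vs 15-year asset: ${short_lived:.0f}/MWh")
```

Because capex dominates, spreading it over 35 years instead of 15 cuts the cost per delivered megawatt-hour roughly in half in this toy example, even before any efficiency differences are considered.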
I would additionally point out that Energy Vault’s system does not require difficult-to-source raw materials (e.g., cobalt, lithium, rare earths, etc.)
and that the drastic reduction in the length of supply chains makes GESS solutions attractive from the standpoint of anti-fragility.
As our world deals with the pressing issues of energy storage and grid reliability, we stand on the brink of a new era of responsibility.
Energy Vault is meeting that responsibility with novel technology—illuminating the way forward with its state-of-the-art gravity energy storage technology.
Intelligent investors take note.
NOTES
[1] Energy Vault also has another part of its business that helps industrial facilities and grid operators to design and manage storage systems of different types.
Earlier this month, the company announced that it had received a comprehensive, successful due diligence evaluation, commonly referred to in the industry as a “Bankability Report”, of its UL9540 factory-certified, B-VAULT battery energy storage system (BESS) from DNV, a leading independent assurance and risk management provider.
Martha Martins assisted with the research and drafting of this article.
In a challenging economy, businesses and their teams are seeking ways to cut costs, and that includes what they spend on technology.
New tools and systems have opened up unprecedented possibilities for streamlining processes and saving money.
Still, the pursuit of a “comprehensive” tech stack can come with unexpected overspending, especially if an organization’s various departments haven’t been keeping the IT team apprised of new hardware and/or software acquisitions.
Further, to flip a common saying, it can be difficult to see the trees for the forest—centered in their organization’s technology efforts as they are, even tech leaders may be overlooking line items that could be cut in both their team’s budget and the company’s overall technology spend.
Below, 14 technology leaders discuss common technology areas and functions that companies of all sizes and stripes may be spending too much on, as well as what can be done to trim the fat.
1. Look For Redundant Software Subscriptions
Regularly reassessing and right-sizing your software licenses or subscriptions is a powerful cost-saving strategy.
Start by conducting a thorough audit of your current software usage.
Identify which licenses are actively utilized and which might be redundant.
This will not only help you cut costs, but also ensure that your business is equipped with the tools it truly needs for optimal efficiency.
– ,
2. End Local Data Hosting
For the last two decades, “cloud first” has been the mantra to cut costs.
Regardless, many companies are still hosting their data locally for security reasons.
They should think again: Today, it is possible to use the benefits of the cloud without ever giving up on data sovereignty or security.
The key is to look for encrypted cloud storage and email solutions, as these combine the two requirements.
– ,
3. Drop Outdated Software Development Practices
Many companies are clinging to outdated software development practices.
Transitioning to Agile and Scrum methodologies and embracing DevOps and CI/CD processes can lead to more streamlined, efficient operations.
This shift not only enhances team productivity, but also cuts down on prolonged development cycles and reduces resource waste, leading to significant cost savings.
– ,
4. Stop Overpaying For Cloud Resources
Companies can save money by optimizing their cloud storage.
Often, we pay for more cloud capacity than we actually use.
By regularly checking and adjusting their cloud resources to match their real needs, businesses can cut costs significantly.
This means turning off unused services and scaling down where less power is needed, ensuring we only pay for what we need.
– ,
5. Extend The Life Of Data Center Hardware
The largest budget item is employee costs, but it is hard to right-size your organization to manage your needs.
The second-largest item is IT infrastructure costs.
To reduce operating and capital expenditures, simply extend the lifespan of your data center hardware—for example, from five years to eight years.
Within 15 years, you will cut the expense of one full hardware lifecycle.
Additionally, switch from maintenance provided by the original equipment manufacturer to third-party maintenance to halve those costs.
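The arithmetic behind that claim can be checked in a few lines, under the simplifying assumption that hardware is repurchased each time its lifespan ends.

```python
import math

horizon_years = 15

def refresh_cycles(lifespan_years, horizon_years):
    """Number of hardware purchases needed to stay covered over the horizon
    (the initial buy plus each replacement)."""
    return math.ceil(horizon_years / lifespan_years)

five_year = refresh_cycles(5, horizon_years)   # purchases at years 0, 5 and 10
eight_year = refresh_cycles(8, horizon_years)  # purchases at years 0 and 8
print(f"Saved cycles over {horizon_years} years: {five_year - eight_year}")
```

Three purchase cycles shrink to two over the 15-year horizon, i.e. one full hardware lifecycle of capital expenditure is avoided.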
– ,
6. Leverage Open-Source Solutions
At inception, many companies opt for centralized software solutions.
Those solutions are easy to implement, but they often come with subscription fees.
Instead, move your tech stack to an open-source solution with no ongoing fees.
There will be slightly higher upfront implementation costs, but with your solution already mapped out, lifting and shifting to an open-source resource should be trivial.
– ,
7. Automate Routine Tasks With AI
IT and operations can both be significantly streamlined by using artificial intelligence tools to automate routine tasks and optimize system performance.
The use of AI reduces the need for extensive manual intervention, improves efficiency and cuts labor costs, enabling businesses to save budget dollars while more efficiently applying their resources to tasks that require human focus, specificity and expertise.
– ,
8. Balance Office Space And Remote Work Expenses
Aside from staffing costs, office space is an organization’s biggest predictable expense.
In addition, many organizations have made a huge IT investment to support remote work over the past few years.
Companies can now use data to analyze the utilization of existing real estate and technology to make faster, more accurate decisions and reduce costs.
– ,
9. Renegotiate Vendor Contracts
Businesses often wait until the end of the contract term to renegotiate software vendor contracts.
It’s critical to remember that a challenging economy affects both sides of the transaction.
Consider renegotiating your terms now—regardless of the renewal date—while offering a longer contract commitment.
You reduce your recurring costs while giving the vendor more certainty over recurring revenue.
– ,
10. Settle For Off-The-Shelf Solutions
I believe that a key area for cost reduction is custom software development.
Many businesses overspend on bespoke solutions when off-the-shelf products would suffice.
By auditing software needs and opting for standardized software when possible, companies can significantly cut down on expensive development and maintenance costs without compromising on efficiency.
– ,
11. Adopt BYOD Policies
Rather than furnishing company-issued devices to every employee, contemplate adopting “bring your own device” policies.
By implementing such a strategy, employees become responsible for the costs associated with hardware, thus reducing the financial burden on the company for procuring and maintaining devices.
– ,
12. Audit Your Data Holdings
Companies often consider only the direct costs of data storage, overlooking other aspects.
Regulated data in ungoverned locations becomes a liability, and excessive data requires a longer time to restore, increasing damages in ransomware scenarios.
Understanding what data you have, where it is and auditing its use allows you to not only trim storage costs, but also reduce risk and liability.
– ,
13. Scale Back Upgrades And Replatforming Initiatives
One way to lower tech costs is through the systematic elimination of chronic, unnecessary back-office enterprise software upgrades and replatforming initiatives.
Because of vendors’ periodic uplifts every three to five years in the name of “modernization,” regular software upgrades have become a part of many organizations’ DNA.
These projects, often justified by security and compatibility concerns, are better solved through other, less expensive solutions.
– ,
14. Consolidate Vendor Services
Vendor charges from product building, software, cloud management or supporting existing applications or technologies represent a significant portion of IT costs.
Vendor consolidation is a strategy that offers companies a strategic advantage through negotiating and combining vendors’ services for a better pricing model.
It can also help to maintain strategic long-term relationships with fewer contacts.
– ,
Continuing to innovate as a SaaS company requires the ability to balance the art of delivering the tools and technology your customers need today while proactively anticipating and building toward what they will desire in the future.
While the concept of co-innovation isn’t exactly new, the rigor and business practice around it have become much more advanced in the last decade, encouraging organizations to look beyond their company’s four walls to innovate.
Co-innovating with customers directly involves them in product development, making them a stakeholder in the process.
Partnering with customers on innovation is an especially powerful tool for developing products and services that closely align with their needs and future desires.
Engaging customers in product innovation takes the guesswork out of understanding their needs, ensuring that innovation responds to their requirements, addresses their pain points and delivers measurable value.
We recently expanded capabilities within our product suite at Pushpay in order to address the needs of a new business vertical, and co-creating with some of our current customers was an essential part of our development and innovation strategy.
Benefits Of Co-Innovating With Customers
As you continue to evolve your offerings, it’s important not to innovate in a vacuum.
Partnering with customers on product development can accelerate innovation and increase product adoption.
That translates to delivering real value to customers faster, which can increase customer satisfaction and boost loyalty.
A study by Hitachi found that bringing stakeholders like customers directly into the innovation process yields significant bottom-line benefits.
Over half (52%) of survey respondents said that a co-creative approach to innovation has reduced the cost of developing products and services in their businesses, 51% said that co-creation has improved financial performance and 61% said that co-creation has created new commercial opportunities.
Partnering with customers does require a high degree of coordination.
Getting it right involves bringing the right voices to the table and creating connections that feed into a continuous loop of customer feedback, insight, development tasks, product innovation and customer value.
For startups or mid-sized companies that are looking to put some structure and rigor behind a more mature co-innovation approach, here are four steps to get started. Co-innovating with customers starts with listening.
All of our conversations with customers are focused on understanding their needs and pain points.
Research interviews are continuous and ongoing, framed against a broad set of topics.
They are recorded, cataloged and tagged against customer needs and pain points.
All of that data becomes democratized so that product development teams across our organization are able to access the data, learn from it and adjust their product plans.
From a tactical perspective, customer-centered organizations need to ensure that they are capturing feedback and sharing it in a way that is transparent and easily accessible.
One of the strategies that can help you quickly gain the above insights is through the development of a customer advisory board.
At Pushpay, this board is made up of a small number of our strategic and most influential customers who are critical to our business and growth.
We regularly have bigger-picture conversations with this board about how our technology can help them resolve operational challenges and capitalize on opportunities for growth.
This advisory board has high expectations related to the role their voice should play in the evolution of our product.
They also view their inclusion in our customer advisory board as an indication of how important they are to us as customers.
This group is defined in conjunction with our customer service and sales teams and is used across the business for different discovery efforts.
As an extension to the advisory board, consider drilling down another level by creating product panels, which could represent specific products, services or technology.
As power users, these panel members regularly use a specific area of the software and are more familiar with day-to-day workflows.
Engaging with this group enables internal stakeholders to make more informed decisions about how a company should evolve or innovate to better meet their needs.
Beyond investing in internal resourcing to manage the above customer feedback loops and touchpoints, it’s important to define a group of internal leaders to help prioritize the feedback and ideas before development takes place.
At any fast-moving tech company, roadmap prioritization (and re-prioritization) conversations are essential.
You can create a product guild to help surface ideas, define product development direction and share about recent product innovations.
They are responsible for helping ensure that roadmap items align with business goals and reflect the current needs and future desires of customers.
This group should include your head of product, engineering, marketing, customer success and more.
Once you are ready for testing and deployment, a healthy pool of alpha and beta customers is essential.
They are your sounding board to ensure that what your team has developed is functional and actually solves the intended problem.
The alpha group is a smaller group that is open to receiving communication daily through the development process.
They have access to an alpha channel, which is essentially a test site that enables two-way communication.
They are part of the innovation process every step of the way and are an essential part of our development team.
When new products or features are more refined, a larger beta group of customers is engaged for testing—which is pretty standard for most technology companies whether you are rolling out something new internally or externally.
Like most companies, we continue to evolve to ensure these programs work to create transformational solutions that solve real problems and create tangible value for our customers.
However, when you have the customer at the center of all of your development decisions, you are forced to remain hyper-focused on creating the right solutions to help them achieve their mission now and into the future.
Physicians and provider organizations are prioritizing practice management solutions that help them meet the high demands of running a business.
Many of us as patients have experienced losing a trusted physician to a new employer, where that physician became part of a new organization that uses different insurance.
This scenario is happening more frequently as waves of consolidation and an unsteady economy are changing how and where patients receive their care, leaving patients scrambling to find a new provider.
Healthcare has seen significant consolidation among technology companies, insurers, and, critically, among care providers themselves.
The Physicians Advocacy Institute has published findings showing that hospitals or corporations employ nearly 70% of all physicians, an increase of 12% between 2019 and 2020.
The American Medical Association (AMA) also reports that the number of physicians working in private practices decreased by 13% between 2012 and 2022.
Patients who have been left wondering why their favorite physician moved to another practice should understand the challenges their care providers face and how they’re responding to those challenges.
The forces driving consolidation are many – physician shortages, regulatory pressures, the rising cost of medical malpractice insurance, an aging population with greater needs – and care providers need to adapt to stay economically viable in a shifting landscape.
Physicians are also receiving lower reimbursements for the care they provide.
For instance, Medicare cut physician payment rates in its Physician Fee Schedule in 2023.
In response, physicians are joining Management Services Organizations (MSOs), or combining forces with other physicians to create their own provider networks.
By pooling resources, physicians have greater negotiating power with Medicare and private insurers, and can earn more competitive reimbursement rates.
The climate has also put an emphasis on efficiency.
Physicians and provider organizations are prioritizing practice management solutions that help them meet the high demands of running a business and eliminate any unnecessary spending.
Patients ultimately reap the benefits of an efficiently run healthcare business, as they will see shorter wait times for appointments, smoother billing processes and a stronger overall patient experience in the clinic.
Consolidation has also helped some physicians continue to practice medicine.
Against the backdrop of a physician shortage that is projected to get worse over the coming years, we cannot afford to lose practicing physicians to other lines of work.
With a challenging economy and tight regulations, physicians and their staff members are looking to technology to help solve inefficiencies with emerging capabilities.
Most notably, those include artificial intelligence-enabled functions that focus on clinic operations.
Various forms of AI have made splashy headlines in recent months for potentially transformative applications.
For instance, AI has immense potential to improve how we diagnose diseases and prescribe therapy.
The White House has acknowledged that potential through steps that establish a framework for AI’s implementation, including an executive order on AI safety.
However, the stakes are very high in healthcare, and AI’s status as an unproven technology requires a measured approach to any clinical application.
That is why the best course for AI development must focus on operations.
For instance, physicians have long pointed to clinical documentation as an administrative burden contributing to burnout and taking away valuable time they could spend with patients.
It marks a pain point that desperately needs streamlining.
Technology developers are answering that call with tools such as AI-enabled ambient listening software.
Rather than asking physicians to write clinical notes or hire a scribe, ambient listening can run in the background and take highly detailed notes, allowing physicians to have a natural, uninterrupted conversation with their patients.
The U.S. spends a staggering amount annually on healthcare overhead – a figure that should draw more attention than it does.
AI can help reduce that spending by lowering the administrative burden that has driven private practitioners out of business and burdened health organizations with exorbitant operating costs.
By applying AI to less glamorous tasks, such as appointment scheduling, patient reminders and documentation, AI can enhance efficiency, save time, and reduce burnout among physicians and staff.
Despite recent periods of economic uncertainty and a shifting landscape in healthcare, there are reasons for patients to feel optimistic about the future of care delivery.
For one, we live in a golden age of tech innovation – Bill Gates has compared AI to the advent of the internet.
It will indelibly impact healthcare delivery and open the door for capability we haven’t even considered yet.
Additionally, external forces cannot change the fact that patient care happens within the four walls of a doctor’s office or hospital, and those care providers are still empowered to drive positive patient experiences and outcomes through their own internal efficiencies.
As recently as 2010, for instance, only a small share of hospitals had an electronic health record system – that number is just under 100% today and paper charts are all but extinct.
While disruptive technology such as AI will eventually find clinical applications, short-term gains in operations will lower the cost of care and can help to solve the issue patients care about the most – equitable access to affordable care.
Close examination shows the Hermes anti-jam communications unit is made of cheap commercial components.
FPV drones are a signature weapon of the Ukraine conflict.
These racing quadcopters, converted into cheap kamikaze drones used by both sides, are capable of striking targets such as vehicles and bunkers, even from five or more miles away.
Both sides have fielded jammers that fire a beam of radio waves to knock out drone communications, said to take out thousands of drones a month.
Now a Russian group claims to have developed a ‘magic radio’ for FPVs which is highly resistant to jamming.
A physicist with the handle DanielR took the device apart in a recent teardown.
“From a technological perspective there is nothing surprising here,” DanielR told Forbes.
But the device does make efficient use of cheap, off-the-shelf components.
Off-The-Shelf Electronic Warfare
The first thing DanielR notes is that while Russian Telegram channels hail the device as an all-Russian creation, it is made with imported parts.
“The Russians removed the labels from the most important piece, but they need not have bothered,” he notes.
The item is still easily identifiable as a module made by Chinese company RAKwireless, which DanielR calls “an easy-to-use, small-size, low-power solution for long-range wireless data applications,” available in different radio frequency bands online for $5.99.
This is a device for LoRa, short for Long Range, a radio communication technology designed for low-power wide-area networks.
LoRa has become a standard building block for Internet of Things applications and the hardware is widely mass-produced.
This makes it attractive for FPV builders, especially given its flexibility.
“The LoRa radios have always been able to operate on different frequencies and this ability has been used throughout the war,” says DanielR. LoRa uses little power and can communicate at up to three miles in urban areas and five miles or more in the open.
Many drone operators now use a repeater, carried on another drone, to extend the reach.
DanielR notes that Hermes’ “magic antenna with filter” looks like an ordinary commercial antenna.
Because the size of an antenna corresponds to the radio waves it picks up, he estimates that it operates on a frequency of about 930 MHz.
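As a rough illustration of that antenna-size reasoning (this is not DanielR’s actual calculation, just the standard free-space relationship between wavelength and frequency), a quarter-wave element for roughly 930 MHz comes out to about 8 cm, so an antenna of about that length points to that band:

```python
# Relate antenna length to operating frequency via the free-space wavelength.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def quarter_wave_length_cm(freq_hz: float) -> float:
    """Quarter of the free-space wavelength, in centimeters."""
    return (SPEED_OF_LIGHT / freq_hz) / 4 * 100

def estimated_freq_mhz(antenna_cm: float) -> float:
    """Invert the relation: quarter-wave antenna length -> approximate frequency."""
    return SPEED_OF_LIGHT / (antenna_cm / 100 * 4) / 1e6

print(round(quarter_wave_length_cm(930e6), 1))  # ~8.1 cm
print(round(estimated_freq_mhz(8.06)))          # ~930 MHz
```

The same back-of-the-envelope math explains why a visibly longer antenna would suggest a lower band.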
The antenna is fitted with a balun (short for “balancing unit”) and a filter, both of which improve performance.
DanielR says these too are commodity items costing about $0.50 and $0.40 respectively.
The device has a 3D-printed plastic case and is likely operated with a standard electronic control component made by a company based in Switzerland, with an LED display.
Hermes charge around $140 for a transmitter unit equipped with their technology, which DanielR says should allow for a healthy profit margin.
Russian companies are notorious for labelling imported devices as “Russian-made” and charging over the odds, like one infamous device sold as made in Russia at a markup of around 200%.
Hermes remove the labels from components but these are still easily identifiable.
Cheap But Effective
DanielR says that the technology appears sound.
Like the FPV drones themselves, it shows how off-the-shelf consumer technology now works even in a military environment.
“It seems the military experts were not paying attention to just how good these systems became,” DanielR told Forbes.
“We are seeing the effects of extremely powerful technologies being made available to everyone.”
The open-source nature of the equipment used shows what Russia can do even without supposedly military-grade hardware.
“The Russian use of cheap electronics illustrates that controlling the export of chips and other sensitive electronics to Russia, while important, can only ever have a finite effect,” electronic warfare expert Thomas Withington told Forbes.
“Russian ingenuity can come up with solutions to outflank export controls.”
Electronic warfare is a cat-and-mouse game in which each move is met with a counter move as one side attempts to jam communications while the other attempts to sidestep the jamming.
Any advantage tends to be temporary; at the start of 2023 it seemed that Russian jammers were winning, until the Ukrainians adapted.
The pendulum is likely to keep swinging.
“The best way to help the Ukrainians neutralize Russian drones, is to get the Ukrainians as much electronic warfare materiel, particularly electronic attack systems, as they need,” says Withington.
“Electronic warfare represents a robust, and comparatively lower cost, capability which can help blunt Russia’s land power.
This is one reason why the current deadlock in the EU and the US is so unhelpful.”
For the meantime, FPV drones on both sides appear to be highly effective even in the face of jammers, presumably thanks to units like the Hermes one.
As David Axe notes, a whole sub-genre of FPV strike videos has emerged.
Sooner or later the jammers will raise their game again, and DanielR says they might aim to attack the drones’ video signal.
“Video is completely independent of the radio control,” he told Forbes.
“There’s no encryption and channels are limited.
It was meant for hobbyists and is very susceptible to electronic counter measures.
Video is always hard to do.”
The number of FPVs is rising fast.
Russian group Sudoplatov alone claims to produce thousands a month.
Control of the electronic spectrum, to be able to jam the enemy’s drones and ensure that yours cannot be jammed, is a vital battleground, with the winner able to destroy the enemy at will without being destroyed.
Software development has always been in hot pursuit of greater efficiency and speed.
One highly effective strategy that emerged from this quest is the utilization of prepackaged modules – ready-made building blocks that streamline development significantly by automating routine coding tasks.
This enables developers to focus their efforts on more intricate and unique aspects of their projects demanding customization.
While being integral to traditional software development workflows for a while, this modular approach is now disrupting the advanced domain of blockchain as well.
In traditional software engineering, prepackaged modules and frameworks like Angular completely revolutionized the approach to building applications.
By providing reusable, prewritten code snippets and functions that tackle common needs, they save precious developer time and ensure reliable, tried-and-tested solutions.
For example, instead of coding user authentication mechanisms from scratch, a simple API call to an authentication library can handle it.
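As an illustrative sketch of that reuse idea (not tied to any specific library the text names), compare hand-rolling a password scheme with calling a well-tested routine already shipped in Python’s standard library:

```python
import hashlib
import secrets

# Reuse the standard library's tested PBKDF2 implementation rather than
# inventing a homemade hashing scheme - the "prepackaged module" idea.
def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256."""
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Constant-time comparison against a stored digest."""
    return secrets.compare_digest(hash_password(password, salt)[1], expected)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

The developer gets vetted cryptography in a few lines instead of weeks of error-prone custom code.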
The benefits of using prepackaged modules are manifold:
1. They accelerate overall development cycles, as developers can integrate these ready modules into their projects quickly rather than build basic functionalities from the ground up. With the ability to turn concepts into deployed products faster, this rapid reusability is a competitive edge for companies.
2. Modules reduce the need for extensive custom coding, which requires more effort hours and drives up costs.
3. Established modules undergo rigorous community testing and debugging processes, lending to their credibility and stability.
4. Using standardized modules promotes consistency in structure, best practices and compliance across development teams—enabling easier interoperability and maintenance.
While bringing enormous productivity to traditional software processes, blockchain technology poses some unique challenges stemming from its inherent complexity.
By design, blockchains favor decentralization and strong security assurances over convenience.
This manifests in the form of various technical intricacies needed for protocols, encryption, distributed storage and so on.
Building blockchain systems from scratch requires an in-depth understanding of cryptography, peer-to-peer networking, mechanisms like proof-of-work, etc.
Needless to say, this demands significant initial time and resource investments from development teams.
Before blockchain solutions can target mainstream adoption across industries, this barrier to entry needs to be lowered for companies exploring its capabilities.
To address these challenges for blockchain projects, the concept of software accelerators can be implemented.
Blockchain software accelerators aim to abstract away unnecessary complexity and provide developers with readily available building blocks to efficiently construct their decentralized applications.
They contain sets of prebuilt components, tools and scripts specialized for blockchain app workflows—much like prepackaged traditional modules but tuned for distributed environments.
For example, some providers let developers focus on business logic rather than low-level blockchain intricacies with features like smart contract compilation, deployment automation and client-side code generation.
By hiding away the complex internal details behind simple interfaces, blockchain software accelerators allow the integration of blockchain components without needing deep technology expertise.
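A minimal sketch of that facade pattern might look like the following. Every name here (AcceleratorFacade, DeployedContract, the stubbed compile and deploy steps) is invented for illustration; no real blockchain SDK is called, and a production accelerator would do far more:

```python
from dataclasses import dataclass

@dataclass
class DeployedContract:
    address: str
    abi: list

class AcceleratorFacade:
    """Hypothetical accelerator: one call hides compile + deploy details."""

    def _compile(self, source: str) -> dict:
        # Stand-in for a real smart-contract compiler step.
        return {"bytecode": source.encode().hex(), "abi": []}

    def deploy(self, source: str) -> DeployedContract:
        artifact = self._compile(source)
        # Stand-in for signing and broadcasting a deployment transaction.
        fake_address = "0x" + artifact["bytecode"][:8].ljust(8, "0")
        return DeployedContract(address=fake_address, abi=artifact["abi"])

contract = AcceleratorFacade().deploy("contract Counter { ... }")
print(contract.address.startswith("0x"))  # True
```

The point is the interface shape: a developer writes business logic and calls `deploy`, never touching networking or cryptography directly.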
This is gradually democratizing access to blockchain development—earlier limited mainly within niche circles of experts—to any software programmer.
In turn, this wider accessibility can accelerate innovation in the decentralization space and drive adoption across more mainstream enterprises and industries.
Transitioning to a blockchain accelerator strategy is a multifaceted process that demands that organizations thoughtfully plan and execute.
It begins with educating and training staff members in blockchain technology, ensuring they understand how blockchain accelerators work and their potential applications within the organization.
This foundational knowledge is critical for successful adoption and implementation.
Simultaneously, organizations need to conduct a thorough assessment of their needs.
This step involves identifying areas where blockchain can add the most value, such as supply chain management or data security enhancements.
Selecting the right blockchain accelerator is crucial and should be based on factors like ease of use, compatibility with existing systems and specific features that align with the organization’s requirements.
A pragmatic approach involves starting with pilot projects, which allows for testing the integration of blockchain accelerators in a controlled environment.
This helps organizations identify potential challenges and understand the impact on current operations.
Integration with existing IT infrastructure is another critical aspect, ensuring that the new blockchain solutions work seamlessly with current systems and workflows.
Organizations should be prepared to face both technical and cultural barriers during this transition.
Technical challenges (e.g., interoperability issues) and cultural resistance to new technologies require a combination of technical solutions and effective change management strategies.
Ensuring regulatory compliance is also vital, as blockchain initiatives must align with relevant laws and regulations.
Once implemented, it’s important to continuously monitor the performance of blockchain accelerators.
This ongoing evaluation helps identify areas for improvement and ensures the strategy remains relevant and effective.
Engaging with blockchain experts or consultants can provide additional insights and guidance, which can be invaluable during the transition process.
Finally, the adoption of blockchain accelerators should be viewed as part of a long-term strategy, not just a one-off project.
This means ongoing investment in technology updates, staff training and process optimization is essential for maintaining the relevance and effectiveness of the blockchain strategy.
By focusing on these areas, organizations can navigate the transition to blockchain accelerators more smoothly, overcoming challenges and leveraging benefits such as enhanced efficiency, improved security and fostering innovation in decentralized applications.
Anyone fretting about the economy should remember that the sky isn’t falling.
Goldman Sachs predicted that the U.S. economy would grow by more than 2% in 2024.
The real challenge in 2024 isn’t necessarily growth.
Rather, it’s the availability of the workforce.
In September 2023, there were more open jobs than Americans seeking one.
Stubbornly high inflation and lingering pandemic impacts are making it nearly impossible and very expensive to fill needed roles.
All of that means that for managers and their organizations in 2024, it’s arguably never been more important to get the best out of the people they have on board.
Just one hitch: In most companies, there’s an enormous gulf when it comes to connecting people with business outcomes.
Organizations have an abundance of data about employees at their disposal, but most managers are left flying blind.
Not only do they lack insight into how their people work best, but they have no actionable intelligence on how to organize, train and lead to generate optimum business outcomes.
As the co-founder of a company that helps businesses boost their efficiency by showing how people drive results, I’ve seen the high cost of failing to bridge the gap.
Luckily, there are ways to help front-line and senior managers make the connections—improving their own efficiency and business performance in the process.
How People Analytics Can Help Managers At All Levels
Front-line manager efficiency is rooted partly in “housekeeping”—keeping tabs on individual employees and their performance.
Here, people analytics can be a game-changer.
Sometimes, the lack of insight is in surprisingly basic areas like staffing.
One of our customers, a European bank, had 170 people managing compliance with the General Data Protection Regulation (GDPR) when two at the top level would have been enough.
The company had no idea until our people analytics software uncovered the problem.
Compensating employees, the elemental task of understanding how much people should be paid and incentivizing them with raises or bonuses, is another example.
Smart compensation tools can now help managers apply scarce compensation budgets where they will make the most impact while also taking bias out of the equation.
Managers also need to grasp employee engagement.
Are team members gainfully connected inside the organization, and are they likely to stay?
By collecting the traffic data from chats, calendars, email and other popular workplace applications, new workforce dynamics tools can enable a manager to build a profile of an employee’s engagement and organization efficiency.
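A toy sketch of that aggregation idea (all data and weights invented for illustration; real workforce-dynamics tools are far more sophisticated and would anonymize appropriately) might count weighted interaction events per employee:

```python
from collections import Counter

# Hypothetical weights: meetings count more than chats, chats more than email.
WEIGHTS = {"chat": 1.0, "meeting": 2.0, "email": 0.5}

# Invented sample of (employee, event-type) traffic records.
events = [
    ("alice", "chat"), ("alice", "meeting"), ("alice", "email"),
    ("bob", "email"), ("bob", "email"),
]

def engagement_scores(events):
    """Sum weighted interaction events into a per-person score."""
    scores = Counter()
    for person, kind in events:
        scores[person] += WEIGHTS.get(kind, 0.0)
    return dict(scores)

print(engagement_scores(events))  # {'alice': 3.5, 'bob': 1.0}
```

Trends in such scores over time, not the raw numbers, are what would flag disengagement to a manager.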
For retention and the bottom line, those wins can add up.
A 2004 Corporate Leadership Council study found that highly engaged employees were almost 90% less likely to quit.
Meanwhile, companies with highly engaged workforces are more profitable and productive.
These kinds of day-to-day insights on individual employees are helpful, but the true promise of people analytics lies at the aggregate level.
At a hospital, for instance, tying care from nursing teams to patient outcomes is important for both patients and profitability.
Using people analytics, senior management can see the availability of nurses by shift and professional training, how many patients have been served and the overall health outcomes.
The right data can also demonstrate the added value that experienced workers bring, letting employers reward them accordingly.
Experienced nurses, for instance, may translate to superior patient outcomes.
Knowing that helps companies avoid a constant revolving door that leads to lower productivity for remaining employees, lost customer relationships and lack of trust.
What’s Stopping Companies From Using People Analytics?
Despite the benefits of people analytics, many companies still hesitate to adopt them.
The blockers tend to be cultural.
Often, businesses balk at giving data to front-line managers, citing privacy concerns.
These objections overlook the built-in safeguards of modern systems.
Access to relevant data can be limited to individualized roles and can be generalized or anonymized so that no sensitive employee information gets revealed.
Proper provisioning ensures that data only gets into the hands of managers who need it.
Companies also often distrust the quality of their data.
It’s hard to blame them, as they might have 30 to 50 different HR and business systems all generating independent streams of disconnected data.
People analytics can mesh these disparate streams together to provide a comprehensive picture of the organization.
For managers, there’s also a lingering tendency to trust their gut when making decisions about people.
It takes discipline to look at the data first, and it’s up to senior management to model that mindset by asking probing questions about the data and accepting that the results are what they are.
Although it’s no easy task, companies must strive to create a culture that prizes integrating people insights with business decisions.
For starters, that means sharing people data with decision-makers on a regular basis.
To help managers get comfortable with using that information, leaders should tailor data to specific roles and departments—making it accessible to parse and understand.
They should also make sure the numbers tell a story by putting data in context whenever possible.
When the harnessing of people data delivers a win, talking about that success can motivate others to step up and do the same.
Additionally, this culture shift calls for some myth-busting.
Even though companies consist of people, there’s still a mistaken belief that people data doesn’t contribute to business results.
Leaders must instill the opposite message: By understanding how people work best, organizations can boost efficiency and productivity.
A 2020 Forrester study found that organizations using data to drive decisions are more likely to exceed revenue goals.
In a tight and expensive labor market, companies that can successfully connect people with results will have an advantage over the competition.
By harnessing those insights to focus and engage employees, managers can make smarter decisions for the business and take better care of their teams.
Vice President Kamala Harris will preside over her third National Space Council (NSC) meeting today (Dec. 20), and you may be able to watch it live.
The meeting, held in Washington, D.C., will focus on international partnerships, according to a media advisory the White House issued earlier this month.
The meeting will begin at 2 p.m. EST (1900 GMT) and you’ll be able to watch it live here at Space.com, courtesy of the White House, if a webcast is available.
The NSC helps steer and shape American space policy.
It’s made up of several dozen high-ranking government officials, including the secretary of defense, the NASA administrator, and the vice president, who serves as chair.
The first NSC meeting under Harris was held at the United States Institute of Peace in Washington, D.C., on Dec. 1, 2021.
During that conclave, she highlighted some of the major space priorities of the Biden administration, which include using space assets to study and help mitigate climate change and getting a handle on space junk , which is a growing problem in Earth orbit.
Harris’ second NSC meeting took place at NASA’s Johnson Space Center in Houston on Sept. 9, 2022.
During that meeting, she “announced commitments from the U.S. government, private sector companies, education and training providers, and philanthropic organizations to support space-related STEM [science, technology, engineering and math] initiatives to inspire, prepare, and employ the next generation of the space workforce,” White House officials wrote in the media advisory.
“She also highlighted the importance of commercial and international partnerships for space exploration.”
By Morgan Lee – Associated Press – Wednesday, December 20, 2023
SANTA FE, N.M. —
Like Christmas trees, Santa and reindeer, the poinsettia has long been a ubiquitous symbol of the holiday season in the U.S. and Europe.
But now, nearly 200 years after the plant with the bright crimson leaves was introduced north of the Rio Grande, attention is once again turning to the poinsettia’s origins and the checkered history of its namesake.
Some things to know: The name “poinsettia” comes from the amateur botanist and statesman Joel Roberts Poinsett, who happened upon the plant in 1828 on a side trip during his tenure as the first U.S. minister to a newly independent Mexico.
Poinsett, who was interested in science as well as potential cash crops, sent clippings of the plant to his home in South Carolina, and to a botanist in Philadelphia, who affixed the eponymous name to the plant in gratitude.
A life-size bronze statue of Poinsett still stands in his honor today in downtown Greenville.
While Poinsett is known for introducing the plant to the United States and Europe, its cultivation – under different Indigenous and Spanish language names – dates back to the Aztec empire in Mexico 500 years ago.
Among Nahuatl-speaking communities of Mexico, the plant is known as the cuetlaxochitl (kwet-la-SHO-sheet), meaning “flower that withers.”
It’s an apt description of the thin red leaves on wild varieties of the plant that grow to heights above 10 feet (3 meters).
Year-end holiday markets in Latin America brim with the potted plant known in Spanish as the “flor de Nochebuena,” or “flower of Christmas Eve,” which is entwined with celebrations of the night before Christmas.
The “Nochebuena” name is traced to early Franciscan friars who arrived from Spain in the 16th century.
Spaniards once called it “scarlet cloth.”
Additional nicknames abound: “Santa Catarina” in Mexico, “estrella federal,” or “federal star,” in Argentina and “penacho del Inca,” or “Inca’s headdress,” in Peru.
Ascribed in the 19th century, the Latin name, Euphorbia pulcherrima, means “the most beautiful” of a diverse genus with a milky sap of latex.
Most ordinary people in Mexico never say “poinsettia” and don’t talk about Poinsett, according to Laura Trejo, a Mexican biologist who is leading studies on the genetic history of the U.S. poinsettia.
“I feel like it’s only the historians, the diplomats and, well, the politicians who know the history of Poinsett,” Trejo said.
Not long after Poinsett brought the flower to the U.S., interest spread quickly in the vibrant, star-shaped bloom that – in a dose of Christmas cheer – flourished with the approach of winter as daylight waned.
Demand spread to Europe.
The 20th century brought with it industrial production of poinsettias amid crafty horticulture and Hollywood marketing by father-son nurserymen at the Ecke Ranch in Southern California.
For his part, Poinsett was cast out of Mexico within a year of his discovery, having earned a local reputation for intrusive political maneuvering that extended to a network of secretive masonic lodges and schemes to contain British influence.
Mexican biologists in recent years have traced the genetic stock of U.S. poinsettia plants to a wild variant in the Pacific coastal state of Guerrero, verifying lore about Poinsett’s pivotal encounter there.
The scientists also are researching a rich, untapped diversity of other wild variants, in efforts that may help guard against poaching of plants and theft of genetic information.
The flower still grows in the wild along Mexico’s Pacific Coast and into parts of Central America as far as Costa Rica.
Trejo, of the National Council of Science and Technology in the central state of Tlaxcala, said some informal outdoor markets still sell the “sun cuetlaxochitl” that resemble wild varieties, alongside modern patented varieties.
In her field research travels, Trejo regularly runs across households that conserve ancient traditions associated with the flower.
“It’s clear to us that this plant, since the pre-Hispanic era, is a ceremonial plant, an offering, because it’s still in our culture, in the interior of the county, to cut the flowers and take them to the altars,” she said in Spanish.
“And this is primarily associated with the maternal goddesses: with Coatlicue, Tonantzin and now with the Virgin Mary.”
The “poinsettia” name may be losing some of its luster in the United States as more people learn of its namesake’s complicated history.
Unvarnished published accounts reveal Poinsett as a disruptive advocate for business interests abroad, a slaveholder on a rice plantation in the U.S., and a secretary of war who helped oversee the forced removal of Native Americans, including the westward relocation of Cherokee populations to Oklahoma known as the “Trail of Tears.”
In a new biography titled “Flowers, Guns and Money,” historian Lindsay Schakenbach Regele describes the cosmopolitan Poinsett as a political and economic pragmatist who conspired with a Chilean independence leader and colluded with British bankers in Mexico.
Though he was a slaveowner, he opposed secession, and he didn’t live to see the Civil War.
Schakenbach Regele renders tough judgment on Poinsett’s treatment of and regard for Indigenous peoples.
“Because Poinsett belonged to learned societies, contributed to botanists’ collections, and purchased art from Europe, he could more readily justify the expulsion of Natives from their homes,” she writes.
The cuetlaxochitl name for the flower is winning over some new enthusiasts among Mexican youths, including the diaspora in the U.S., according to Elena Jackson Albarrán, a professor of Mexican history and global and intercultural studies at Miami University in Oxford, Ohio.
“I’ve seen a trend towards people openly saying, ‘Don’t call this flower either poinsettia or Nochebuena.
It’s cuetlaxochitl,’” said Jackson Albarrán.
“There’s going to be a big cohort of people who are like, ‘Who cares?’”
Amid disputes over what to call the plant, Poinsett’s legacy as an explorer and collector still looms large, as 1,800 meticulously tended poinsettias are delivered in November and December from greenhouses in Maryland to a long list of museums in Washington, D.C., affiliated with the Smithsonian Institution.
A “pink-champagne” cultivar adorns the National Portrait Gallery this year.
Poinsett’s name may also live on for his connection to other areas of U.S. culture.
He advocated for the establishment of a national science museum, and in part due to his efforts, a fortune bequeathed by British scientist James Smithson was used to underwrite the creation of the Smithsonian Institution.
Copyright © 2023 The Washington Times, LLC.
It’s been 35 years since I graduated from college and first stepped foot into the office at NeXT, Steve Jobs’ company.
My first role was in computer science, but I always joke that I was the guy who got coffee for the guy who got coffee for Steve.
Those first few years were the most formative of my entire career and helped prepare me for leadership roles at tech companies like Gluecode, Apigee and now DataStax.
The journey has been awesome so far, filled with incredible team wins, more mistakes than I can count and tons of valuable learnings.
In this article, I’ll share three of the most impactful lessons I’ve learned through decades of tech leadership.
I hope these stories help you on your journey or, at the very least, remind you that on the other side of failure is always growth.
The best “no’s” propel you.
Rejection is an uncomfortable reality for most, but it’s in these moments that true growth happens.
Paradoxically, sometimes the best word you can hear is “no.”
In business, a timely “no” has the power to propel you in a different—sometimes even better—direction.
I learned this lesson at Apigee when pursuing a partnership with a major tech company.
After meeting with a well-known leader at the company, they called me and said, “Chet, you don’t want to do this.”
The reason was simple: Our proposal didn’t align with their priorities.
In fact, it wasn’t even on their radar.
That “no” stung, but it also gave us the ability to think differently about how to launch our API business.
Instead of the legacy middleware approach, we were able to create a whole new market and space—with the right team, at the right time.
Without that rejection, who knows what Apigee would have ended up doing.
Instead, we delivered more value to our customers fast, ultimately leading to a public offering and $625 million acquisition by Google.
When you change, change hard and fast.
If you’ve ever worked in technology or at a startup, you know that mastering the art of the pivot is crucial.
Success lies not only in the act of change but how quickly and decisively you execute the pivot.
In other words, when you decide to shift your approach, do it hard and fast.
Not every pivot will change the direction of the entire organization (in fact, most shouldn’t)—but every change you make has to be “all in.”
Think about the speed of the generative AI market today.
Things are changing daily.
You don’t have to jump on every new trend right away.
But when you pick a lane of focus or find something that aligns with your strategy, do it hard and do it fast.
We did this at my current organization with vector search and retrieval augmented generation (RAG).
In just eight weeks, we moved the entire organization to refocus on vector, launched a product and changed our go-to-market and messaging.
We are an AI company—not only because the market is hot, but because we found a specific niche to go all-in on.
The point is: The quicker you can make a change culturally, operationally and strategically—and commit to it fully—the better off you’ll be.
Even if you fail, at least you fail fast.
When there’s a doubt, there’s no doubt.
One of the biggest regrets throughout my career (and many leaders I have spoken to feel the same) has been delaying people-related decisions.
To put it plainly, letting someone go is hard.
We don’t hire people to let them go, yet every leader has to make these important decisions for their company.
There are two common reasons an individual is not a fit for a company: performance and culture. Performance cases are pretty straightforward.
Let the person know where they’re missing the mark and provide them with the resources they need to improve.
They either do better or they don’t.
A job isn’t just about what you do, it’s also about how you do it.
Someone might deliver on “the what” (i.e., deliver results) but not “the how” (i.e., their behaviors don’t align with company values).
These situations aren’t as cut-and-dry, but culture issues can’t be ignored—they have a massive impact on the overall work environment.
The thought of letting someone go can be uncomfortable and even immobilizing, but being a leader means walking through discomfort.
My advice is this: If you have a doubt about someone, let them go as soon as possible.
It comes down to respecting them and making the decision quickly.
Conclusion
These three learnings have been my compass in the evolving world of tech, where adaptability and resilience are key to success.
Keep them in mind as you navigate your path.
Welcome to the digital age, where your company’s domain is more than just a web address—it’s the cornerstone of your digital identity and a gateway to global opportunities.
But with great potential comes great risk.
Unseen threats lurk in the shadows of the digital world, ready to undermine your brand and jeopardize customer trust.
Although often overlooked, domain risk management (DRM) has become a crucial component of modern business strategy to help overcome these challenges.
Imagine a customer seeking out your website only to find a counterfeit site damaging your reputation.
Worse yet, a cyberattack may disrupt your service, eroding customer confidence.
These scenarios are not mere possibilities; they’re the stark realities of today’s digital ecosystem.
In this article, we dive deep into DRM, uncovering its role in safeguarding digital presence and unlocking growth.
We will explore practical examples and actionable strategies that not only help ensure survival but also drive success in the digital wilderness.
Crafting Robust Domain Risk Management
Effective DRM has become crucial in the digital era, extending beyond IT security to encompass organizational, technical and strategic aspects.
It involves identifying and mitigating risks such as cybersquatting and phishing, establishing strong organizational policies and procedures and fostering cross-departmental collaboration.
A comprehensive DRM strategy safeguards a company’s digital identity, enhances its brand trust and unlocks new opportunities.
In implementing a successful DRM program, businesses should develop a comprehensive strategy aligned with their overall goals, create clear domain management policies, utilize advanced technological tools and engage in continuous employee training.
Ongoing monitoring and adaptation are essential to respond to evolving digital landscapes and maintain the effectiveness of DRM strategies.
Strategic Opportunities Through Effective Domain Risk Management
Effective DRM extends beyond risk reduction, offering strategic opportunities for businesses to optimize their digital presence and unlock growth potential.
It can contribute to value creation in several ways. A secure and well-managed domain enhances customer and partner trust and reinforces brand strength; regular domain security checks and trust-enhancing measures like SSL certificates help bolster brand presence.
Effective domain management ensures email deliverability and protects company reputation, reducing risks like spam blocking and phishing; implementing DMARC, SPF and DKIM to authenticate email and regularly monitoring domain reputation help maintain effective communication channels.
Proactively managing and minimizing risks in domain portfolios can help secure competitive advantages through early domain registrations, active domain market monitoring and swift responses to new trends and challenges.
DRM not only mitigates risks but helps strengthen online presence, build trust in the brand and open up new business opportunities.
Implementing A Successful Domain Risk Management Program
Establishing a successful DRM program requires an emphasis on practical guidance and best practices.
Let’s explore what this looks like.
Define the goals and parameters of the DRM program, aligning with the overall business strategy and specific company needs.
Involve key stakeholders across IT, legal, marketing and executive teams for a holistic approach.
Create clear guidelines for domain registration, management and monitoring.
Set processes for handling domain-related risks and incidents.
Ensure policies are flexible for future changes, with regular reviews and updates.
Use tools for domain monitoring, risk analysis and management.
Integrate security mechanisms like DNSSEC, DMARC, SPF and DKIM.
Select tools that offer comprehensive views of the domain portfolio and easily integrate with existing systems.
Conduct regular training and awareness programs for employees about the importance of DRM and its common risks.
Also, tailor training programs to different roles and responsibilities within the company.
Establish processes for the ongoing monitoring of the domain portfolio and rapid response to new risks and trends.
Set up feedback loops and regular assessments to keep DRM strategies and actions up to date.
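To illustrate the email-authentication mechanisms mentioned above, here is a sketch of what the corresponding DNS TXT records might look like. The domain, selector name, mail-server host, and truncated DKIM public key are all placeholders, not records from any real deployment:

```text
; SPF: authorize only the listed mail servers to send for this domain
example.com.                       IN TXT "v=spf1 mx include:_spf.mailer.example -all"

; DKIM: publish the public key receivers use to verify message signatures
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0G..."

; DMARC: instruct receivers to reject mail failing SPF/DKIM and send reports
_dmarc.example.com.                IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

In practice, organizations typically start with a DMARC policy of `p=none` to collect reports before tightening to `quarantine` or `reject`.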
Continuous Improvement In Domain Risk Management
Effective domain risk management is an ongoing process that evolves with the digital landscape.
Let’s explore the path of continual improvement in DRM programs and how it can yield long-term benefits.
A strong DRM strategy fortifies a company’s digital security and brand integrity, provided you regularly revise and adapt it to align with technological advancements and market changes.
To better reduce vulnerability to cyberattacks and other digital risks, research and implement advanced security technologies and continuously train staff in risk recognition and mitigation.
To ensure your DRM program opens doors to new business opportunities and market insights, utilize DRM data and analytics to identify market opportunities and develop new business strategies.
Continually assess the ROI of DRM initiatives and refine strategies for maximum value creation.
This can help enhance the overall value of a company by securing and optimizing its digital presence.
Foster an open culture of learning where employees are encouraged to actively contribute to enhancing cybersecurity practices.
This can raise the overall awareness of cybersecurity and data protection.
In this way, strategically investing in robust DRM can help meet critical security needs and lay the groundwork for long-term growth.
Future Outlook On Domain Risk Management
With increasing digitalization and the rise of the Internet of Things (IoT), DRM’s importance in safeguarding company assets and data grows.
Emerging technologies like AI and machine learning will play significant roles in advancing DRM strategies, especially in risk monitoring and analysis.
DRM will continue to evolve, supporting not just security but also business development and innovation.
Domain risk management requires a holistic approach that extends beyond traditional IT security, focusing on comprehensive risk management, continuous innovation and cultural integration.
By strategically approaching DRM as detailed above, businesses can better safeguard their digital assets and ensure long-term success in a rapidly evolving digital world.
It’s truly your last chance to shop holiday deals at Target.
Their holiday delivery guarantee ends today (December 20), so make sure to get your orders in before the cut-off.
If you want a new Apple Watch, you need to act soon.
They’re being pulled from stores this week over a patent dispute.
Luckily, one of the affected models has been slashed in price in Target’s holiday sale — that’s the Apple Watch 9 for $329 at Target ($70 off).
You can also score Asics deals from $4 and PS5 games from $19.
Keep scrolling for my favorite last-minute Target holiday deals.
For more, check out the best deals in Best Buy’s flash sale.
Asics sale: deals from $4 @ Target
Target is offering a massive sale on Asics apparel for men, women, and children. After discount, prices start as low as $4. The sale includes shoes, socks, hoodies, athletic apparel, and more. Note that some styles can also be found in Amazon’s Asics sale.
Price check: deals from $9 @ Amazon
Holiday decor: deals from $3 @ Target
Give your holiday tree a little sass with one of Target’s discounted holiday ornaments. Pictured is the Bloody Mary Christmas Tree Ornament for $5. The sale also includes traditional ornaments as well as ceramic and glass decor.
Switch games: deals from $19 @ Target
From Luigi’s Mansion 3 to Super Mario Odyssey, Target is taking from $10 to $20 off a small selection of games (most games are $20 off). The sale also includes titles such as Mario Kart 8, Mario + Rabbids: Sparks of Hope, and more. Note that Amazon and Best Buy are offering similar sales, but with slightly different titles.
Price check: from $19 @ Amazon | from $19 @ Best Buy
PS5 games/accessories: deals from $19 @ Target (50% off!)
As part of its holiday sale, Target has select PS5 games and accessories on sale from $19. The sale includes titles such as Madden 24, Star Wars Jedi: Survivor, and more.
Asics Packable Running Jacket: was $55 now $24 @ Target
Save on the packable jacket for runners.
Lightweight and quick-drying, the highly functional Asics jacket is perfect for on-the-move outdoor workouts and packs away neatly into its own pocket.
Roku Streaming Stick 4K: was $49 now $39 @ Target
The best Roku for most people, the Roku Streaming Stick 4K is a great upgrade for your TV.
It offers an intuitive interface, long Wi-Fi range and Dolby Vision support for the best possible picture.
Plus, you can get built-in voice search for finding shows and movies to watch.
Price check: $39 @ Amazon
Sur La Table 4-in-1 Air Fryer: was $79 now $44 @ Target
The Sur La Table 4-in-1 Air Fryer lets you fry, bake, roast, and broil all at the touch of a button.
It has 8 built-in presets, so it can make a home chef out of anyone.
The large window in front of the 5-quart basket allows you to keep tabs on your meal throughout the cooking process.
Price check: $44 @ Amazon
Asics Gel-Contend 7 running shoe: was $65 now $49 @ Target
Using GEL Technology for brilliant shock absorption and cushioning, these shoes feel plush underfoot.
The deal comes in a range of colorways and sizes, so be sure to check what you need for the relevant discount before purchasing.
Ring Video Doorbell: was $99 now $54 @ Target
The wireless Ring Video Doorbell comes with 1080p video recording, motion detection, and night vision.
It’s also got a rechargeable battery and can be installed without much hassle.
In our Ring Video Doorbell (2nd gen) review, we called it the best video doorbell you can get for under $100.
Shark NV360 Navigator Lift-Away Deluxe Vacuum: was $199 now $159 @ Target
This multi-functional vacuum can transform from an upright to a handheld in no time.
It’s powerful enough to tackle carpets and bare floors and superb for pet hair pick-up.
Its swivel steering makes it easy to maneuver in and out of tight spaces, in corners, around furniture.
Price check: $159 @ Amazon
Apple AirPods Pro 2 USB-C (2023): was $249 now $199 @ Target
The new USB-C version of the AirPods Pro 2 have the same H2 chip to provide 2x more noise cancellation than their predecessors, plus they support Apple‘s new lossless audio protocol that will debut with the Vision Pro mixed-reality headset in 2024.
They also offer Personalized Spatial Audio with dynamic head tracking for a more immersive audio experience.
Price check: $199 @ Best Buy | $199 @ Amazon
TCL 50″ S4 S-Class 4K TV: was $299 now $249 @ Target
The S4 S-Class is one of TCL’s new budget TVs.
Yet despite its budget friendly price, it packs Dolby Vision/HDR10/HLG support, DTS Virtual:X audio, built-in Chromecast, and Google TV Smart OS.
You also get three HDMI ports, including one with eARC support.
Price check: $248 @ Amazon
Apple Watch 9 (GPS/41mm): was $399 now $329 @ Target (lowest price!)
Apple Watch 9 deals are live and Target is taking 18% off the newest Apple Watch.
The new watch features a faster S9 chip for better performance, 4-core neural engine, and an 18-hour battery life.
It also supports Apple Double Tap, a new gesture that can be used to answer/end a call, stop a timer, play/pause music, or dismiss an alarm.
In our Apple Watch 9 review, we said the Editor’s Choice watch got significant performance upgrades and remains the best smartwatch you can buy.
Price check: $329 @ Walmart | $329 @ Amazon
Apple Watch Ultra: was $799 now $639 @ Target
Save $160 on the Apple Watch Ultra GPS/Cellular smartwatch.
It features a rugged, titanium case with up to 36 hours on a single charge.
Designed for the great outdoors, it features a Wayfinder display with a compass in the dial and Waypoint marking.
It can be customized for mountain or trail and with night mode, allows for easy visibility in the dark.
LG C3 42″ 4K OLED: was $1,199 now $899 @ Target
Released in 2023, the LG C3 is one of the best mid-tier OLED TVs you can buy.
In our LG OLED C3 review, we said the Editor’s Choice TV delivers perfect blacks, thrilling contrast, and rich, accurate colors at every point across the visual spectrum.
It’s also perfect for gamers with a suite of Game Optimizer features and a 120Hz refresh rate.
It offers Dolby Vision/HDR 10/HLG support, four HDMI 2.1 ports, built-in Amazon Alexa, Google Assistant/Apple HomeKit support and LG’s Magic Remote.
Note that Amazon has it for just a few bucks less.
Price check: $896 @ Amazon | $899 @ Best Buy
NASA just combined all of my favorite things: space, lasers, and cats.
In a first-of-its-kind demonstration of deep space optical communication, the space agency streamed a high-definition video from 19 million miles away from Earth.
And as it turns out, NASA is just as obsessed as we are when it comes to sharing cat videos.
On December 11, a gold-capped laser transceiver attached to NASA’s asteroid probe Psyche beamed a 15-second-long video of an orange tabby cat named Taters chasing a laser pointer up and down a couch, the space agency announced this week.
The feline live stream broke the record for the longest distance covered by data-encoded laser beams—80 times the distance between Earth and the Moon—as NASA prepares to upgrade its communication skills for deep space missions.
The star of the video is actually the pet of a NASA employee.
Footage of Taters is overlayed with graphics that illustrate several features from the technology demonstration, such as Psyche’s orbital path, and technical information about the laser and its data bit rate.
It also displays more information about Taters, including its heart rate, color, and breed.
The video is not only so adorable that I could DIE, but it also demonstrated NASA’s ability to transmit data encoded in lasers from farther distances within deep space.
We couldn’t think of a better video example to serve as the first high-definition stream sent via lasers from deep space.
NASA’s Deep Space Optical Communications (DSOC) experiment launched on board the Psyche spacecraft as the first demonstration of laser, or optical, communications from as far away as Mars.
In November, the transceiver beamed data encoded within a near-infrared laser from nearly 10 million miles away from Earth.
For its latest demonstration, the laser transceiver beamed its encoded near-infrared laser to the Hale Telescope in San Diego County, California, at a maximum bit rate of 267 megabits per second.
The video took 101 seconds to reach Earth, and each frame from the looping video was streamed live to NASA’s Jet Propulsion Laboratory (JPL) in Southern California, where the footage of Taters’ laser-chasing adventures played out in real time.
“Despite transmitting from millions of miles away, it was able to send the video faster than most broadband internet connections,” Ryan Rogalin, the project’s receiver electronics lead at JPL, said in a statement.
“In fact, after receiving the video at Palomar, it was sent to JPL over the internet, and that connection was slower than the signal coming from deep space.”
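The reported figures are easy to sanity-check: light needs roughly 100 seconds to cross 19 million miles, and a 267 Mbit/s link moves about half a gigabyte during a 15-second clip. A quick illustrative calculation (the physical constants are standard; the distance and bit rate are the values reported in the article):

```python
# Back-of-the-envelope check of the numbers reported for the DSOC demo.

SPEED_OF_LIGHT_M_S = 299_792_458        # m/s, exact by definition
METERS_PER_MILE = 1_609.344

# One-way light travel time for the ~19 million miles Psyche was from Earth
distance_m = 19e6 * METERS_PER_MILE
one_way_seconds = distance_m / SPEED_OF_LIGHT_M_S

# Data volume a 267 Mbit/s link moves during the 15-second video
bit_rate_bps = 267e6
video_seconds = 15
data_megabytes = bit_rate_bps * video_seconds / 8 / 1e6

print(f"One-way light time: {one_way_seconds:.0f} s")      # ~102 s, close to the reported 101 s
print(f"Data at peak rate over 15 s: {data_megabytes:.0f} MB")  # ~501 MB
```

The small gap between the computed ~102 seconds and the reported 101 seconds simply reflects that "19 million miles" is a rounded distance.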
Optical communication systems pack data into the oscillations of light waves in lasers, encoding a message into an optical signal that is carried to a receiver through infrared beams that the human eye can’t see.
Although lasers have been used for communications as far away as the Moon, the recent test marks the farthest distance covered by the laser beams.
“JPL’s DesignLab did an amazing job helping us showcase this technology—everyone loves Taters,” Rogalin added.
NASA normally uses radio waves to communicate with its missions that are located beyond the Moon, but near-infrared light packs data into significantly tighter waves, which allows for more data to be sent and received.
The DSOC experiment aims to demonstrate data transmission rates 10 to 100 times greater than current radio frequency systems used by spacecraft today, according to NASA.
Optical communication does become more challenging over longer distances, as it requires extreme precision to point the laser beam.
“When we achieved first light, we were excited, but also cautious.
This is a new technology, and we are experimenting with how it works,” Ken Andrews, project flight operations lead at JPL, said in a statement.
“But now, with the help of our Psyche colleagues, we are getting used to working with the system and can lock onto the spacecraft and ground terminals for longer than we could previously.
We are learning something new during each checkout.”
The primary purpose of the Psyche spacecraft is to explore and study the unique metallic asteroid Psyche, providing insights into the history of planet formation and core dynamics.
The farther away Psyche travels on its way to its asteroid target, the fainter the laser photon signal will become.
Although the task is set to become more challenging, the team behind the experiment is still keen on having some fun with it.
“One of the goals is to demonstrate the ability to transmit broadband video across millions of miles.
Nothing on Psyche generates video data, so we usually send packets of randomly generated test data,” Bill Klipstein, the tech demo’s project manager at JPL, said in a statement.
“But to make this significant event more memorable, we decided to work with designers at JPL to create a fun video, which captures the essence of the demo as part of the Psyche mission.”
This photo taken on June 2, 2019 shows buildings at the Artux City Vocational Skills Education All countries claim that they have outlawed slavery.
But slavery still exists.
Today we refer to slavery as “forced labor.”
Forced labor can be imposed by State authorities, by private enterprises, or by individuals.
It is observed in all types of economic activity and exists in every country.
But a particularly pernicious form of slavery occurs in China.
The People’s Republic of China has arbitrarily detained more than one million Uyghurs and other mostly Muslim minorities in China’s far western Xinjiang Uyghur Autonomous Region.
The US Department of Labor estimates that 100,000 Uyghurs and other ethnic minority ex-detainees in China may be working in conditions of forced labor following detention in re-education camps.
It is commonly believed that Europe leads the US in environmental, social, and governance legislation and enforcement.
However, no nation’s enforcement of any ESG issue comes close to US Customs and Border Protection’s enforcement of the Uyghur Forced Labor Prevention Act (UFLPA).
Brian Carelli, vice president of sustainability and partnerships at Infor Nexus commented that “the US surprisingly leapfrogged Europe very quickly with UFLPA.”
Infor Nexus provides a solution that connects physical and financial supply chains and provides end-to-end visibility and supply chain agility.
CBP statistics show that more than half a billion dollars of goods presumed to have been made by Uyghur slave labor have been detained and prohibited from entering the country since active enforcement of the UFLPA law began on June 21 of last year.
Of the shipments detained at the US border since enforcement began, 2,583 were denied entry into the US.
The value of the shipments detained was over half a billion dollars.
The detentions have occurred across a variety of industries.
But electronics; industrial and manufactured; and apparel, footwear, and textiles make up the bulk of products detained.
But forced labor is even being used in the Chinese fishing fleet, and seafood detentions are occurring as well.
Surprisingly, most of the goods seized did not originate in China but rather in Malaysia and Vietnam.
In the case of goods detained from Malaysia, manufacturers there were producing goods often made with extractive ores that were mined and processed in Xinjiang.
In the case of Vietnam, it was more likely that Vietnamese manufacturers were making apparel with cotton grown or processed into fabric in Xinjiang.
Outside of the U.S., several countries have enacted legislation to ban products made with forced labor.
However, these regulations don’t have the same bite as the US law.
What gives the US act teeth, according to Ethan Wooley, an executive at Kharon, is that “the ‘rebuttable presumption’ part of UFLPA is truly unique. Anything coming out of Xinjiang is presumed to have used forced labor unless you can prove the negative.”
There is also a lack of a de minimis exception; this means that even an insignificant input of product produced in whole or in part with forced labor could result in enforcement action.
According to Kharon, the attorneys who have worked on UFLPA, and were willing to speak publicly on it, are not aware of any company that has overcome the rebuttable presumption.
Kharon sells a global risk analytics platform used to aid in UFLPA enforcement and by importers who want to make sure they are not unwittingly buying goods mined, produced, or manufactured with slave labor from Xinjiang.
The UFLPA sounds straightforward.
The law directs the Forced Labor Enforcement Task Force to develop a strategy for supporting the enforcement of the prohibition of the importation of goods into the United States manufactured wholly or in part with forced labor in the People’s Republic of China, especially in Xinjiang.
However, experts on the law explain that there are myriad complexities associated with UFLPA.
First, according to Jackson Wood, the director of global trade intelligence at Descartes Systems Group, “the CBP is getting more sophisticated in their pre-entry due diligence” and their ability to flag shipments that may be higher risk.
In addition to the Kharon technology, they are using an AI-based solution from Altana to identify bad actors.
Descartes is a leading provider of global trade intelligence.
However, CBP does not just depend on technology.
The Forced Labor Enforcement Task Force, established by the UFLPA, proactively targets commodities and subregions in Xinjiang where it believes more intelligence is needed about the forced labor issue.
This advisory board, in turn, works with NGOs.
“The CBP has relied on nongovernment organizations like human rights institutions, labor, and sustainability organizations who have a presence in these high-risk areas” according to Mr. Wood.
It can be dangerous to make these reports to NGOs.
But some NGOs have “volunteers and former diplomats who know how to navigate through unique circumstances in sensitive jurisdictions.”
Secondly, companies find it very difficult to gain visibility to their end-to-end supply chain.
A buyer knows who they are buying from.
But they usually don’t know their supplier’s supplier or their supplier’s supplier’s supplier.
In the apparel supply chain, for example, it is said that most companies cannot fully trace their value chains, and over half can’t even track their supplier’s supplier.
The third complication is that even if the importer of record can trace their extended supply chain, they then must prove they are not working with bad actors.
“The importer of record for the shipment will get a detention notice from CBP,” Mr. Wood from Descartes explained.
“The burden is then on the importer to provide their paper trail on their due diligence and reasonable care.
This allows the importer to say, ‘Here’s why we are confident that there is not a forced labor component to that shipment.’”
That proof may come in the form of certificates from trusted industry groups, a chemical analysis showing the agricultural product did not originate in Xinjiang, or attestations from all upstream suppliers that they’ve done due diligence and that, to the best of their knowledge, there’s no forced labor involved in the production of their goods.
In effect, Mr. Wood explains, the importer is “using diverse data points to build a multifaceted argument” for the CBP not to detain their shipment.
The fourth complication is that in addition to the rebuttable presumption hurdle, shippers now only have 30 days once their goods are detained to prove they have a valid supply chain.
One of Infor Nexus’s customers did a practice drill.
It took them well over a month to collect the documentation they would have needed to get a shipment out of detention.
If a company can’t prove the validity of the supply chain in 30 days, they may be allowed to reexport the goods to another nation.
Or the CBP may choose to destroy the goods.
Another complication is that while there is a naughty and nice list – a denied parties list – telling buyers which companies should not be part of their supply chain, that list is far from fully inclusive.
Currently, 33 companies are on the list because of a known connection to Uyghur forced labor.
That list is continuing to grow.
But just because a company somewhere up in a company’s supply chain is not on the list does not prevent the CBP from detaining imported goods.
“The CBP is doing a really good job of being transparent about the (detention) statistics,” Mr. Wood explained, “but they’re not going to tell you everything that they’re looking for” because that would allow bad actors to flout the regulations.
Finally, enforcement is not likely to ease over time.
This act has very strong bipartisan support.
“Congress has made it clear that enforcement of UFLPA is a priority for CBP,” Mr. Wooley of Kharon stated.
“Congress has granted them funding to add more agents and invest in technology.”
Republicans and Democrats don’t agree on much.
About the only things they do agree on are that China is our nation’s primary threat, that they have engaged in unfair trade, and that reliance on slave labor puts goods produced in America at a competitive disadvantage.
The Covid-19 pandemic forced many companies to seek help through outsourcing to adapt quickly to unprecedented challenges.
Even during economic downturns, outsourcing is still a viable solution.
That’s because companies face an expanding scope of work with a diminished workforce due to massive layoffs.
However, the key question remains: Why do some companies succeed in collaborating with outsourced software development teams while others fail?
According to research, only 10% of organizations prioritize strategic partnerships with third-party service providers, which can set a project up for failure.
The Main Cooperative Models Of In-House And Outsourced Teams
Managing in-house and outsourced teams involves different expectations and dynamics.
Outsourced teams usually offer more flexibility in terms of deadlines and communication.
However, recognizing different priorities is crucial.
An agency may prefer long-term, costly cooperation with decision-making autonomy, while you may need swift, cost-effective collaboration.
The way you engage with your outsourced teams depends ultimately on your principles and priorities.
Due to these approach variations, several types of collaboration with outsourced teams have emerged.
In the project-based model, tasks are completed within set deadlines, based on fixed prices or time and material terms.
Choosing the right contractor is crucial in this model to avoid the “Death Valley” scenario.
Success here lies in engaging a vendor with solid project management skills and the ability to handle client expectations.
Other critical factors include a proactive approach to requirement gathering and excellent technical expertise.
• Ensure detailed requirements with visuals and realistic expectations for software development.
• Understand software development intricacies to hire a team that won’t make empty promises.
• Select an agency experienced in discovery and business analysis, ready to structure requirements, make estimations and develop a realistic project plan.
To assess an outsourced team’s suitability, focus on their expertise and case-relevant references, not just their overall software development background.
Evaluate documentation, reporting and project structuring.
Assess their approach to timelines, project scope and customer expectations.
Finally, understand how the contractor addresses project failures, missed deadlines and setbacks.
Companies allocating part of their annual development budget to outsourcing often prefer a dedicated team managed by an agency, dealing with both in-house developers and external agency staff.
The crucial aspect is effectively merging the processes of these organizations.
Establishing a dedicated team through outsourcing requires careful integration into existing operations, clarity in roles and productive communication.
Clearly define and communicate individual roles.
Use the Responsible, Accountable, Consulted, Informed (RACI) matrix to specify each team member’s responsibilities.
Implement a cohesive process aligning the dedicated team’s software development life cycle (SDLC) with your organization’s SDLC.
Use a well-defined version control management (VCM) approach for code synchronization, branching and merging.
Leverage CI/CD daily builds and code style guides.
Foster a culture of open communication and establish regular synchronous and asynchronous communication channels.
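As a minimal sketch, the RACI assignments described above can be captured in a small data structure and validated automatically. All task and role names below are hypothetical, chosen only to illustrate the idea:

```python
# Minimal RACI matrix sketch (hypothetical tasks and roles).
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
RACI = {
    "API design":       {"in-house lead": "A", "agency architect": "R", "QA": "C", "PM": "I"},
    "Implementation":   {"in-house lead": "C", "agency architect": "R", "QA": "I", "PM": "A"},
    "Release sign-off": {"in-house lead": "A", "agency architect": "C", "QA": "R", "PM": "I"},
}

def validate(matrix):
    """Each task must have exactly one Accountable and at least one Responsible."""
    problems = []
    for task, roles in matrix.items():
        if list(roles.values()).count("A") != 1:
            problems.append(f"{task}: needs exactly one 'A'")
        if "R" not in roles.values():
            problems.append(f"{task}: needs at least one 'R'")
    return problems

print(validate(RACI))  # -> [] when the matrix is well-formed
```

A check like this makes role gaps visible early: a task with no Accountable owner, or two, is exactly the ambiguity the RACI technique is meant to eliminate when in-house and agency staff work side by side.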
An external agency takes a large portion of responsibility for your project.
Defining milestones, motivating the team and providing them with the North Star Metric (NSM) are crucial steps to making the most of their expertise.
Outstaffing involves integrating external engineers directly into your team, requiring hands-on management.
Challenges here include maintaining interest, fostering a sense of belonging and addressing high turnover rates.
Encourage open communication and regular updates on the project’s impact and goals.
Include external engineers in events, meetings and discussions to integrate them smoothly.
Recognize and appreciate their contributions to create a feeling of being valued as a part of the larger team.
Conduct thorough onboarding and orientation to familiarize engineers with the project and team dynamics.
Offer opportunities for skill development and career growth to encourage long-term commitment.
Verify that your outsourcing partner conducts in-depth background checks and maintains strong HR support.
All in all, success depends heavily on your unique needs and careful vendor choice.
Six Considerations For Choosing The Right Outsourcing Model And Vendor
Picking a suitable outsourcing model and vendor is the way to avoid failure and make your project succeed.
To make a wise decision, focus on the following considerations.
Understand project size, complexity and specialized skill needs.
Consider whether you grasp the scope and can structure it in the project without heavily changing it along the way.
Check if your team has the necessary skills and bandwidth to manage software development.
Determine whether you can handle the project or need a vendor with strong consultancy skills.
Consider budget constraints and the ability to scale resources as needed.
Do you operate a fixed budget or have annual planning in place?
Is your budget flexible or based on the “last out-of-pocket” approach?
Finally, are you willing to pay monthly or based on project delivery?
Identify risks and select a model that addresses them.
Assess quality requirements and scalability needs for each model.
Ask yourself: “Am I prepared to choose motivation and involvement over strict deadlines and fixed prices?” and “What’s my risk management approach if something goes wrong?”
Choose between project work, dedicated teams or outstaffing depending on your requirements, budget and stakeholder input.
Ensure you select one model and stick to it.
Trust your partner and work as a team toward success.
Put down all primary arrangements and milestones in an email or a statement of work (SOW).
Note that creating overly extensive contracts can strain relationships.
However, documenting all agreed-upon terms is still vital.
Choose what’s best for you, and good luck finding your ideal vendor.
Countless companies have achieved growth in their tech operations through diligent research and professional consultation.
To ensure success, continuously monitor progress, gather feedback and be ready to adapt the chosen model for the desired outcomes.
By Tom Murphy – Associated Press – Wednesday, December 20, 2023
Rite Aid has been banned from using facial recognition technology for five years over allegations that its surveillance system was used incorrectly to identify potential shoplifters, especially Black, Latino, Asian or female shoppers.
The settlement with the Federal Trade Commission addresses charges that the struggling drugstore chain didn’t do enough to prevent harm to its customers and implement “reasonable procedures,” the government agency said.
Rite Aid said late Tuesday that it disagrees with the allegations, but that it’s glad it reached an agreement to resolve the issue.
The FTC said in a federal court complaint that Rite Aid used facial recognition technology in hundreds of stores from October 2012 to July 2020 to identify shoppers “it had previously deemed likely to engage in shoplifting or other criminal behavior.”
The technology sent alerts to Rite Aid employees either by email or phone when it identified people entering the store on its watchlist.
The FTC said in its complaint that store employees would then put those people under increased surveillance, ban them from making purchases or accuse them in front of friends, family and other customers of previously committing crimes.
The federal complaint also said there were “numerous instances” where the technology incorrectly identified someone who entered the store, and Rite Aid failed to test its accuracy before using it.
It also said the company “failed to take reasonable steps to train and oversee the employees charged with operating the technology in Rite Aid stores.”
Rite Aid says the allegations center on a pilot program it used in a limited number of stores, and it stopped using this technology more than three years ago.
“We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” the company said in a statement posted on its website.
“However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint.”
Rite Aid also noted in a prepared statement that any agreement will have to be approved in U.S. Bankruptcy Court.
Rite Aid announced last fall that it was closing more than 150 stores as it makes its way through a voluntary Chapter 11 bankruptcy process.
Rite Aid Corp., based in Philadelphia, has more than 2,000 locations.
The company has struggled financially for years and, like its bigger rivals CVS and Walgreens, faces financial risk from lawsuits over opioid prescriptions.
Copyright © 2023
The Washington Times, LLC.
A trade group associated with Meta, TikTok, and X is fighting back against a Utah law forcing minors to obtain parental consent and abide by a strict curfew in order to access social media.
Though lawmakers in Utah and a growing number of other states believe regulations like these are necessary to protect young users from online harms, a new lawsuit filed by NetChoice argues the laws go too far and violate First Amendment rights to free expression.
Utah officially passed its Social Media Regulation Act back in March.
The law, which is set to take effect March 1, 2024, is actually a combination of a pair of bills, SB152 and HB311.
Combined, the bills prohibit minors from opening a new social media account without first receiving written parental consent.
It also restricts minors from accessing social media between 10:30 p.m. and 6:30 a.m., unless they receive permission from a parent or guardian.
Tech platforms would be required to verify the ages of their users.
Failure to do so could result in a $2,500 fine per violation.
Utah lawmakers supporting the law say it’s necessary to reduce young users’ exposure to potentially harmful material online such as eating disorder and self-harm related content.
Lawmakers say the curfew, one of the more controversial elements of the law, could help ensure minors aren’t having their sleep impacted by excessive social media use.
A US Surgeon General advisory report released earlier this year warned of potential sleep deprivation linked to excessive social media use.
“While there are positive aspects of social media, gaming, and online activities, there is substantial evidence that social media and internet usage can also be extremely harmful to a young person’s mental and behavioral health and development,” Utah Attorney General Sean Reyes said during a press conference earlier this year.
NetChoice, in a suit filed Tuesday, claims the provisions violate Utahns’ First Amendment rights and amount to an “unconstitutional attempt to regulate both minors’ and adults’ access to—and ability to engage in—protected expression.”
The suit also takes aim at the law’s age verification requirement, which NetChoice argues would violate the privacy of all Utah social media users and ultimately do more harm than good.
“The state is telling you when you can access a website and what websites you can access,” NetChoice Vice President and General Counsel Carl Szabo told PopSci.
“Our founders recognized the dangers in allowing the government to decide what websites we can visit and what apps we can download.
Utah is disregarding that clear prohibition in enacting this law.”
The law wouldn’t just affect minors either.
Szabo said the law’s rules forcing platforms to verify the age of users under 18 would, by definition, also result in verifying the ages of users over 18.
Social media companies would be required to use telecom subscriber information, a social security number, government ID, or facial analyses to verify those identities if the law takes effect.
Aside from its constitutional issues, Szabo and NetChoice argue the bill would harm young users in the state by putting them at a disadvantage to minors in other states who have access to more information.
The digital curfew, which the suit refers to as a “blackout,” could restrict students from accessing educational videos or news articles during a large chunk of the day.
The suit claims the curfew could also interfere with young users trying to communicate across multiple time zones.
“The First Amendment applies to all Americans, not just Americans over the age of 18,” Szabo said.
NetChoice is calling on courts to halt the law from taking effect while its lawsuit winds its way through the legal system.
That could happen.
The trade group already successfully petitioned a US District Court to halt a similar parental consent law from going into effect in Arkansas earlier this year.
A spokesperson for Utah’s attorney general told PopSci, “The State of Utah is reviewing the lawsuit but remains intently focused on the goal of this legislation: protecting young people from the negative and harmful effects of social media use.”
Statewide online parental consent laws and bills regulating minors’ use of social media picked up steam in 2023.
Texas, Arkansas, Louisiana, and Ohio have all proposed or passed legislation limiting minors’ access to social media and severely limiting the types of content platforms can serve them.
Some state laws, like the one in Utah, would go a step further and grant adults full access to a child’s account and ban targeted advertising to minors.
Supporters of these state bills cite a growing body of academic research appearing to draw links between excessive social media use and worsening teen depression rates.
But civil liberties organizations like the ACLU say these efforts, though often well intentioned, could wind up backfiring by stifling minors’ freedom of expression and limiting their access to online communities and resources.
Szabo, of NetChoice, said states should step away from online parental consent laws broadly and instead invest in digital wellness or education campaigns.
Businesses actively invest in e-commerce and expect their investments to provide a return.
To correctly assess the ROI, it is essential to understand the total cost of ownership (TCO) of e-commerce: what components it contains and how to manage them.
McKinsey researchers note that “organizations should understand what systems or operating practices contribute to application costs to make sound financial-management decisions.”
However, many companies do not clearly understand digital commerce TCO.
In this article, I share some insights on this.
According to Gartner, the total cost of ownership of an IT system includes “hardware and software acquisition, management and support, communications, user expenses, the opportunity cost of downtime” and so on.
The list is quite extensive.
However, this classification is too generic and can be used to describe any business IT system.
Using such a classification for managing e-commerce TCO would be challenging.
This article aims to close this gap.
The Cost Of Licenses
License costs include all the company’s fees incurred by owning an e-commerce solution.
It is essential to realize that besides licensing payments to the e-commerce platform provider, it also includes the cost of other licenses that the company must pay for the e-commerce solution to function successfully.
For example, if a company uses an AI service to personalize prices, its cost should also be taken into account.
The Cost Of Implementation
The implementation cost includes all costs associated with launching a new e-commerce solution.
This encompasses efforts on requirements definition, design, software development, uploading data to the system and onboarding clients to a new platform.
Typically, companies focus on the cost of software development because these costs are significant and are often presented explicitly.
For example, it can be an estimate of the project cost offered by the implementation partner.
In reality, other costs, like the salaries of non-technical specialists involved in the project, are also significant but “hidden” in other spending pools and very often mistakenly excluded from the e-commerce cost analysis.
The Cost Of Operations
Operation costs include indirect costs to ensure the smooth functioning of the e-commerce solution.
It may include expenses for infrastructure, database maintenance, software updates, user support and catalog maintenance, for instance.
A business spends resources on operations anyway, although the amount spent may vary as the scope of the business changes.
If a company uses a SaaS or PaaS digital commerce platform, the operation costs associated with infrastructure and maintenance will be built into the license cost.
This, however, does not mean that the company does not pay for them.
The Cost Of Innovation
The cost of innovation seems to be some sort of “dark matter” in e-commerce expenses.
This is the least realized category of business expenses, often poorly managed by businesses.
At the same time, the cost of innovation significantly impacts business success.
The cost of innovation includes all costs associated with modifying the e-commerce solution after implementation.
As soon as the new e-commerce solution is launched, the company starts receiving tons of innovation requests through customer feedback, as well as requests for new integrations, customer experience improvements and more.
This is when a business realizes that the development does not end with the solution implementation.
The market keeps changing, and to outpace its competitors, a business needs to improve its customer experience aggressively.
The more successful an e-commerce company is, the more changes are required to maintain its lead.
Keeping up with the innovation trends can be challenging: Businesses must collect feedback from their clients, partners and employees, intelligently determine business priorities and professionally design new customer experiences.
But it’s crucial to understand that all this effort can bring profit only when the implementation of changes is affordable in terms of technology and processes.
All this means that for companies in a highly competitive environment, analyzing the cost of innovation in detail is essential.
And they must pay close attention to their technologies’ ability to reduce the cost of innovation.
Insights On How To Manage E-Commerce Costs Efficiently
• To adequately manage your e-commerce investments, consider all components of the TCO when analyzing costs.
Also, make sure you are forecasting with reasonable accuracy.
• Paying attention to what your expenses include is crucial.
For example, besides e-commerce platform licenses, the cost of licenses includes the cost of other related software.
As for the implementation costs, besides development fees, they also include salaries of non-dedicated employees and so on.
• It is essential to differentiate the cost of operations and innovation.
Managing them separately will allow for further evaluation of the innovation ROI.
• When accounting for a significant e-commerce investment, such as launching new digital channels or replacing the e-commerce platform, adding the operations and innovation costs forecast for the next two to three years to the implementation cost is essential.
This will help ensure that the company has budgeted enough to support the initiative before it begins to pay off.
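As a rough sketch of this budgeting rule — implementation cost plus the recurring license, operations and innovation costs forecast over the planning horizon — the arithmetic can be laid out explicitly. All figures below are hypothetical, purely for illustration:

```python
# Hypothetical multi-year TCO estimate for an e-commerce initiative.
# All figures are illustrative, in USD.
implementation = 400_000         # one-off: requirements, design, development, data migration
licenses_per_year = 120_000      # platform plus related services (e.g., an AI pricing tool)
operations_per_year = 90_000     # infrastructure, maintenance, support, catalog upkeep
innovation_per_year = 150_000    # post-launch changes: integrations, CX improvements

def tco(years: int) -> int:
    """Total cost of ownership over a planning horizon of `years`."""
    recurring = licenses_per_year + operations_per_year + innovation_per_year
    return implementation + recurring * years

# Budget to secure before the initiative starts paying off (3-year horizon):
print(f"3-year TCO: ${tco(3):,}")  # -> 3-year TCO: $1,480,000
```

Separating the recurring pools in this way also supports the earlier point about managing operations and innovation independently: each term can be tracked and evaluated against its own ROI.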
Investing in e-commerce can be expensive, so assessing the ROI and TCO correctly is essential.
Although every business must also consider the particularities of their industry, the TCO components described in this article should always be considered.
The Samsung Galaxy S24 reveal is on the horizon and many users are looking forward to the next generation of Samsung phones.
However, a recent leak from known tipster @Tech_Reve may have just revealed a less-than-significant camera upgrade that could dash the spirits of smartphone fans.
“I’m sorry… everyone… It’s reported that Samsung may use the GN3 in the S25/25+,” @Tech_Reve posted on December 18, 2023 (https://t.co/Hft9FS4cCf).
According to the X post, the primary camera sensor that was built into the Galaxy S23 series, the ISOCELL GN3, will be reused in both the Galaxy S24 and S25 models.
The sensor offers a 50 MP resolution in a 1/1.56 format and certainly passes muster.
It’s one of the larger sensors in the smartphone market today and allows for a larger pixel size, but it’s still disappointing news.
This is another potential blow for photography-focused Samsung users, especially after the recent leaks that seemingly confirmed the Galaxy S24 Ultra would not be receiving the rumored 10x telephoto lens.
According to the leak, the S24 Ultra would instead get a 50 MP 5x telephoto lens and a 10 MP 3x telephoto lens, which some consider a strange zoom range due to the relative similarity in focal length.
While it may seem like Samsung is stifling many of its hardware upgrades, it should be noted that it’s far from unusual for manufacturers to reuse parts.
This is often to keep costs down, as well as make mass production easier.
It is also possible that Samsung is aiming to use its AI to compensate for the older technology.
The reported features of the One UI 6.1 and Samsung Gauss include a border extension feature that uses AI to expand the picture.
It will also apparently be possible to edit out subjects in videos and improve the general video and picture quality.
The question of whether the inclusion of AI removes the need for constant hardware improvements is a difficult one to answer.
While the implication is that AI will be able to mitigate the downsides of older hardware, we don’t know how effective it will really be.
However, if it can compensate for the older parts, then it may change how smartphones get made.
It should be noted that, until the phones are released, these rumors are subject to change.
We will only know the exact specifics when the phones are announced and we can test them.
It appears that the Galaxy S24 and Galaxy S24 Ultra will likely be fully revealed at the next Galaxy Unpacked, which is rumored to be happening around mid-January.
Source: tomsguide