Saturn’s rings will disappear this month as scientists reveal reason why

Fear not though, Saturn’s rings will come back into view

Saturn’s rings will disappear later this month, and scientists have explained the very bizarre reason why.

Taking you way back to science classes at school now, but you may have learnt that Saturn’s rings are made up of chunks of ice, as well as rock and dust.

It was astronomer Galileo Galilei who first observed the gigantic rings back in 1610, and they have been studied ever since.

At one point, scientists believed that it would take approximately 300 million years for them to disappear completely.

In fact, data collected by NASA’s Cassini spacecraft in 2017 suggested it could actually take just 100 million years for the rings to disappear for good.

Saturn’s rings are gradually eroding as the Sun’s UV radiation and meteoroids colliding with the rings cause the ice particles to vaporise.

Of course, scientists are predicting that to be far in the future, but from later this month, Saturn’s iconic rings will no longer be visible from Earth.

This is set to happen in two weeks on March 23, but what is the actual science behind it?

Saturn’s rings can normally be seen with a small telescope, but the planet’s tilt relative to Earth will soon turn the rings edge-on and out of view.

IFL Science reports the ‘angle of tilt’ will ‘drop to zero when it gets to 23 March, 2025’, but fear not, the rings will later return to our view.

Such a phenomenon takes place roughly twice during each of Saturn’s orbits of the Sun, which take 29.5 years.

Saturn’s rings are disappearing (Getty Stock Photo)

On top of this, scientists remain perplexed by large smudges – dubbed ‘spokes’ by NASA – appearing on Saturn’s rings every 15 years or so.

Experts and scientists are now working hard to garner a deeper understanding of exactly what is going on.

NASA planetary scientist Amy Simon said: “Thanks to Hubble’s OPAL program, which is building an archive of data on the outer solar system planets, we will have longer dedicated time to study Saturn’s spokes this season than ever before.”

As for seeing Saturn’s rings again, scientists say the rings will be at their brightest and best placed for viewing from Earth on 21 September, but be warned that they will disappear once more in November.

Apple co-founder takes aim at Elon Musk as he reveals huge problem with government position

Elon Musk is CEO of Tesla and SpaceX, the owner of Twitter, and now also spearheading US government advisory body DOGE

Apple co-founder Steve Wozniak didn’t hold back as he reflected on Elon Musk’s influence on Donald Trump’s administration.

In case you haven’t paid any attention to US politics for the last couple of months… Elon Musk is basically everywhere.

The Tesla CEO and billionaire was in the news a lot over the years following his acquisition of Twitter, but he has been in the spotlight even more so (if possible) since he allied with Donald Trump.

Despite not being an elected official, Musk appears to be influencing Trump’s administration, at least in an advisory capacity, and the billionaire has taken plenty of flak for this.

Wozniak criticized Musk for directly veering into politics (Andreas Rentz/Getty Images)

Most recently, Wozniak has made it clear how he feels about this situation.

The 74-year-old, who founded Apple with Steve Jobs in 1976, spoke to hundreds of attendees at Barcelona’s Talent Arena developers fair on March 4 and took issue with Big Tech getting way too caught up in politics.

However, he did acknowledge that due to major tech companies being so big, they have a clear interest in lobbying politicians.

Wozniak said: “Technology companies are huge, they’re huge and [because they are] worth that much money, they have to have some political involvement.

“But actually taking a direct role because they’ve made it big in technology, I don’t like that at all.”

As he began to criticize Musk, he also added that ‘the skills of politics are very different than the skills for technology companies to have success.’

He continued: “When you run a business, you look around and you look for a consensus. If half your employees feel one way and half the other way, you negotiate, you compromise.

Elon Musk has been making waves in politics since allying with Trump (Andrew Harnik/Getty Images)

“I don’t see that happening in the case of Elon Musk… you don’t just say everything is out and start fresh.”

And since leading DOGE (the Department of Government Efficiency), Musk hasn’t been shy about shaking up the status quo in government.

Late last month, he sent an email to all US federal staff and suggested that those who didn’t reply would be fired.

The email demanded that workers outline five tasks they’d completed in the last week with a deadline of 11.59pm on Monday or their failure to respond would be taken as a resignation.

Musk later explained the email on Twitter and said: “This was basically a check to see if the employee had a pulse and was capable of replying to an email.

“Lot of people in for a rude awakening and strong dose of reality. They don’t get it yet, but they will.”

Cybertruck owner reveals how much it really costs to fully charge and people are horrified

David Nguyen took to Instagram to share a video of him charging up his Tesla Cybertruck

While the Cybertruck is seen on the roads far more often nowadays, there’s no doubt many of us remain extremely intrigued by the futuristic-looking vehicle.

To be honest, there have been a few negative stories surrounding the Tesla vehicle in the press, from a video showing a Cybertruck struggling to move in heavy snow while other vehicles drove past, to an owner warning others to avoid his ‘mistake’ after a terrifying crash while using the Cybertruck’s self-driving feature.

Despite that, many are still looking at investing in the vehicle, but with electric cars typically more expensive to buy upfront than petrol and diesel models, there’s a lot to think about.

One advantage of EVs is that they are usually cheaper to run compared to petrol and diesel – though the Cybertruck appears to be the exception.

Entrepreneur David Nguyen took to Instagram to share a video of him charging his Cybertruck from just three percent back up to full.

Now, you’d expect some swanky sort of movement detection or a face scanner to open up the charging port of the vehicle – it is a Tesla after all – however, Nguyen has to give the outside of the truck a few taps with the charger to get the port to open sesame.

In he plugs it, getting back in the car to reveal how long it’ll take to charge the truck up to 100 percent. Drum roll please.

The cost to charge a Tesla Cybertruck to 100 percent may surprise you (Instagram/@utechpia.dev)

And it’s a whopping ‘one hour and 30 minutes remaining’ to reach the charge limit of 296 miles ‘a.k.a. 100 percent’.

Nguyen explained it’ll take 121kWh of energy to reach a full charge, but how much will this cost?

Well, in San Leandro, California, the rate of electricity is $0.61 per kWh and Nguyen kindly breaks down the maths so we don’t have to do it ourselves.

He explained: “That’s 121 x 0.61 that is $73 to give me a full tank. That’s going to give me a range of 296 miles. Is that good? Or not good?”
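For anyone who wants to sanity-check those figures, here is a quick back-of-the-envelope calculation in Python. All three numbers are taken from the video as quoted above (the clip’s rounding lands on $73):

```python
# Figures quoted in Nguyen's video (assumed as stated, not independently verified)
energy_kwh = 121        # energy needed to reach a full charge
rate_per_kwh = 0.61     # San Leandro, CA electricity rate in dollars per kWh
range_miles = 296       # quoted range at 100 percent

cost = energy_kwh * rate_per_kwh
print(f"full charge:   ${cost:.2f}")                 # $73.81
print(f"cost per mile: ${cost / range_miles:.3f}")   # roughly $0.25 per mile
```

At roughly 25 cents a mile, that is in the same ballpark as a petrol truck managing about 14 mpg on $3.50-a-gallon fuel, which goes some way to explaining the reactions below.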

Nguyen resolved: “I think it’s pretty good because you’re driving the future a.k.a the Cybertruck.”

As expected though, not everyone agreed.

One Instagram user wrote: “$70???? I thought electric was cheaper. Why are we doing this again.”

Another added: “80 dollars to fill up my GMC 1500 Sierra Diesel. Almost 600 miles with the full tank lmao . So NOT GOOD. Oh and it only took me four and half minutes to fill up.”

While a third commented: “So it’s the same to fill up a gas SUV, get further distance and it takes 10 minutes compared to 1.5 hours??? Think I’ll stick to combustion.”

And a fourth resolved: “Absolute joke.”

Warning issued to 3,200,000 Google Chrome users over dangerous hacking scam

Hackers are having success with 16 Google Chrome extensions

A warning has been issued to more than 3.2 million people who use Google Chrome every day over a dangerous hacking scam involving the popular web browser.

Whether it be for work purposes or for browsing the web, millions across the globe use Chrome for hours a day on desktop and mobile. And those who use the browser should be aware of a warning about 16 different browser extensions that have been compromised by hackers.

The list of affected extensions includes Blipshot, Emojis, Color Changer for YouTube, Video Effects for YouTube and Audio Enhancer, Themes for Chrome and YouTube Picture in Picture, Mike Adblock for Chrome, Super Dark Mode and Emoji Keyboard Emojis for Chrome, as per the Daily Mail.

Meanwhile, Adblock for Chrome, Nimble Capture, KProxy and Page Refresh, Wistia Video Downloader, Adblocker for Chrome and Adblock for You are also said to be affected.

A warning has been issued to Google Chrome users (Getty Stock Photo)

GitLab Threat Intelligence, who uncovered the dodgy scheme, stated on their website: “We identified a cluster of at least 16 malicious Chrome extensions used to inject code into browsers to facilitate advertising and search engine optimization fraud.

“The extensions span diverse functionality including screen capture, ad blocking and emoji keyboards and impact at least 3.2 million users.

“The threat actor uses a complex multistage attack to degrade the security of users’ browsers and then inject content, traversing browser security boundaries and hiding malicious code outside of extensions. We have only been able to partly reproduce the threat actor’s attack chain.”

The team of computer experts noted that while these extensions have been deleted from the Web Store, those who already have any of them downloaded will need to delete them manually to steer clear of the hackers.

“The threat actor may also be associated with phishing kit development or distribution. The malicious extensions present a risk of sensitive information leakage or initial access,” GitLab Threat Intelligence added on their site.

Hackers are targeting 16 extensions (Jaap Arriens/NurPhoto via Getty Images)

Cybercriminals are seemingly using all the right tricks to take advantage of innocent web users, and have recently been targeting Gmail users too.

Spencer Starkey, a vice-president at SonicWall, has stated companies such as Google need to be on their toes to ensure their users are safe.

He said: “Cybercriminals are constantly developing new tactics, techniques, and procedures to exploit vulnerabilities and bypass security controls, and companies must be able to quickly adapt and respond to these threats.

“This requires a proactive and flexible approach to cybersecurity, which includes regular security assessments, threat intelligence, vulnerability management, and incident response planning.”

Markerless motion capture system opens up biomechanics for a wide range of fields

Researchers at CAMERA have developed technology that can analyze body movement from 2D video footage without the need for markers. Credit: University of Bath

Researchers at CAMERA, the University of Bath’s Centre for Analysis of Motion, Entertainment and Research Applications, have developed open access software that analyzes motion capture data without using markers. They have shown that the markerless system offers clinicians, sports coaches and physiotherapists an unobtrusive way of analyzing body movements from video footage that is comparable to using markers.

Motion analysis traditionally relies on attaching light-reflective markers onto specific points on the body; the movement of these markers in 3D space is then calculated using data from an array of cameras that film the person’s movements from different angles.

Placing markers accurately on the body can be time-consuming to set up and can sometimes interfere with the person’s natural movements.

To overcome this, the team at CAMERA, led by Dr. Steffi Colyer, has developed a non-invasive markerless system using computer vision and deep learning methods to measure motion by identifying body landmarks from regular 2D image data.

Using the same images to evaluate the performance of their fully automated system, they found the results were comparable to those of a traditional marker-based motion capture system. The system works on similar technology to that used by commercial systems, but is available as an open-source workflow and can be adapted to users’ needs more easily.

The team has released a unique dataset to allow other researchers to evaluate new markerless algorithms and further progress the fields of computer vision and biomechanics.

The team used an open source computer vision system, OpenPose, to estimate the position of the joints on a 2D video image of a person running, jumping and walking. They then fuse the data in 3D and input those data into open-source modeling software called OpenSim, which fits a skeleton to the joints and allows the whole body motion to be obtained.
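The ‘fuse the data in 3D’ step is standard multi-view triangulation. Below is a minimal sketch of that step, assuming two calibrated cameras with known 3x4 projection matrices; the pipeline’s actual implementation lives in the team’s open-source code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Direct linear transform (DLT) triangulation of one joint.

    P1, P2: 3x4 projection matrices of two calibrated cameras.
    x1, x2: the joint's 2D pixel coordinates in each view
    (e.g. OpenPose keypoints). Returns the estimated 3D position.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize to a 3D point
```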

The fully synchronized video and marker-based data used in this study, along with the code underpinning the markerless pipeline, are now available and are fully described in a paper recently published in Scientific Data.

Researchers from CAMERA, the University of Bath’s motion analysis center, have developed mo cap technology that could be used by clinicians, physios and sports coaches to analyze body movement for free. Credit: University of Bath

Dr. Colyer said, “The trouble with using markers is that they can be tricky to place on a participant accurately and reliably and this process can take a long time, which isn’t very practical for many participants and applications (for example elite athletes or clinical populations).

“Our markerless system estimates the joint positions from video alone without the need for any equipment to be placed on the participant or any preparation time. This opens the door for us to capture motion data more readily in settings outside of the laboratory and the outcomes for the movements we analyzed are comparable to traditionally-used techniques with markers.

“Our pipeline is open source, which means that anyone with some expertise in the area can use it for free to get movement data from normal video footage.

“This could be useful for physiotherapists, clinicians and sports trainers in a wide range of applications including sports performance and injury prevention or rehabilitation. Additionally, the accompanying data set provides the first high-quality benchmark to evaluate emerging algorithms in this rapidly evolving field.

“We have used the system to measure the biomechanics of skeleton athletes during their push-starts and we have recently taken it out on to the tennis and badminton courts to unobtrusively monitor how much work the players are performing during training and match play.”

This is the most underrated Apple Intelligence feature in iOS 18.2

Apple Intelligence features may be landing on your iPhone, but that doesn’t mean they’ll stay fixed in place once they get there. Because they’re powered by artificial intelligence, Apple Intelligence capabilities have the potential to get smarter as Apple refines its models. And there’s always the possibility of Apple expanding what already-launched features can do.

The latter happened to Writing Tools with the iOS 18.2 update. Writing Tools arrived among the first batch of Apple Intelligence features in October’s iOS 18.1 release with the promise of improving your writing. Besides checking spelling and grammar, Writing Tools could also make suggestions on tone with presets that allow you to make any text you’ve written more professional, friendly or concise. There’s also a Rewrite option in Writing Tools for making wholesale changes to your document.

In my updated iOS 18 review, I wasn’t terribly complimentary toward Writing Tools. Aside from the Professional preset, which does a good job of observing the rules and structure of formal writing, the other options seemed satisfied with swapping in a few synonyms and sanding off any hint of writing voice. The end result was usually robotic text — quite the opposite of what I think we should strive for in writing.

Describe Your Change in the Writing Tools of iOS 18.2 running on an iPhone 15 Pro

(Image credit: Future)

iOS 18.2 expands the arsenal of commands at Writing Tools’ disposal with a new Describe Your Change feature. Instead of relying on the presets, you can type out a text command like “make this more academic” or “make this sound more fun.” The idea seems to be to give you more control over the changes the AI makes to your writing.

Does the addition of Describe Your Change make me reassess the value of Writing Tools? And just how well does Apple Intelligence respond to your editing suggestions? I took Describe Your Change for a test drive, and here’s what I found out.

How to use Describe Your Change in Writing Tools

Go to Apple Intelligence Menu to reach Writing Tools and tap the text field to Describe Your Change

(Image credit: Future)

Describe Your Change works like any other part of Writing Tools, which is available to any iOS app that accepts text input. Just select the text you’re looking to improve and Writing Tools will appear in the pop-up menu that also contains commands like Copy, Paste and whatnot. Some built-in apps like Notes will also feature an Apple Intelligence icon in their toolbar that summons Writing Tools.

Describe Your Change is now listed at the top of the Writing Tools menu that slides up from the bottom of the screen. It’s a text field that appears above the Proofread and Rewrite buttons. Just tap on the field and type in directions for Writing Tools. Tap the up arrow on the right side of the text field to set Writing Tools to work recasting your text.

How Describe Your Change performs

To see how well Describe Your Change follows instructions, I tried three different passages that I wrote in Notes. Two of the test documents were originals; the third was a well-known bit of dialogue from a movie. In addition to seeing if Describe Your Change delivered the changes I was looking for, I also checked to see if the AI tool improved my writing.

Test 1: Make this more enthusiastic

In my first sample text, I wrote a memo to members of my team that describes our next project — rolling a rock endlessly up a hill. Let me tell you, Sisyphus is getting the short end of the stick with this assignment, so I wanted to see if the Describe Your Change command could make my instructions a little livelier.

Text before using Describe Your Change (left) next to results from a “Make this more enthusiastic” command (right) (Image credit: Future)

While Writing Tools has apparently decided that exclamation marks indicate enthusiasm, I do have to admit that AI did a credible job of making the prospect of rolling a rock up a hill sound very exciting. A passage in the original text where I said I was “looking forward to great things and tangible results” became a section where I talked up “the amazing things and incredible results we’re going to achieve together.”

Writing Tools can lay it on a little thick, inserting a “How exciting is that?” right after I explained to Sisyphus that 50% of his bonus was tied to keeping the rock at the top of the hill. That interjection came across as not terribly sincere. But overall, you can’t fault Writing Tools for making my original text more enthusiastic. Sisyphus is now addressed as “the superstar tasked with getting that rock to the top of the hill,” and the memo now ends with a “Good luck, and let’s rock this project!” I like to think the pun was intentional.

Test 2: Make this more humble and earnest

I wanted to see how Writing Tools responded to a request with multiple instructions, so I took a letter to Santa Claus that I would best describe as “brusque” and “demanding” to see if the AI could make it sound a little more accommodating. I asked Writing Tools to pump up the humility and make the requests for presents seem a little less like expectations.

A Christmas letter to Santa (left) next to results from a “Make this more humble and earnest” command (right) (Image credit: Future)

For the most part, Writing Tools did a decent job making me sound less like an expectant brat. A passage where I asked for a PS5 or its cash equivalent went largely untouched, but my assumption that Santa would obviously bring games to go with my PS5 had the rough edges sanded off. (“It would be wonderful to have some games to enjoy on it,” the AI-assisted me told Santa.)

The strongest element with Writing Tools’ pass through my letter was that it really emphasized my gratitude for any gift Santa brought. An assertion that I had been a very good boy who deserved presents became something less assuming: “I hope I’ve been a good boy, as I always strive to be. If my wishes could reflect that, I would be truly grateful.” The first part of that last sentence is phrased a little awkwardly, but at least Writing Tools captured the sentiment I had suggested.

Test 3: Make this friendlier

So far we’ve seen what Writing Tools and Describe Your Change can do with my writing. But what about Academy Award-winning screenwriter Aaron Sorkin? If you’ve seen the film adaptation of his “A Few Good Men” play, you doubtlessly remember the “You can’t handle the truth” speech that Jack Nicholson gives in the climactic courtroom scene. And if you’re like me, you probably wondered, “What if that Marine colonel had just been a little nicer?”

A Few Good Men speech (left) next to the results from a “Make this friendlier” command (right) (Image credit: Future)

So I took that speech, pasted it into Notes and told the Describe Your Change tool to “Make this friendlier” — a pretty tall task given the ferocity of Colonel Jessep’s words. And Writing Tools may have sensed it wasn’t up to the task, as I got a message warning me that the feature really wasn’t designed for this kind of writing. Nevertheless, I opted to continue, just for the purpose of testing the limits of Describe Your Change.

To give Writing Tools credit, it did make the passage friendly, but that involved some serious rewriting to the point where the original intent of the speech was lost to the four winds. “You don’t want the truth” became “Sometimes, the truth can be hard to accept.” But I think my favorite edit was to the closing line: “Either way, I don’t give a damn what you think you’re entitled to” became “Either way, I respect your perspective.” Friendlier, yes. What the author was going for, no.

Describe Your Change verdict

This new addition to Writing Tools only swung and missed on one of the three tests I threw its way, and in that instance, Writing Tools warned me that it was not really equipped to do what I was asking. In the other instances, Describe Your Change definitely struck the tone that I was looking for, and did so in a way that gave me finer control than the original presets in Writing Tools offered.

I think there are still limitations. One test I thought about including but eventually abandoned involved the passive voice — something a lot of writers struggle with. But asking Describe Your Change to “remove the passive voice” or “use active verbs” didn’t produce tangible results, leading me to conclude that’s not something the feature is really designed to do.

I’m not totally sold on Writing Tools yet. Even with the largely successful changes in tone, the AI still left behind some awkward sentences and phrases that didn’t always sound natural. Anyone using Writing Tools to check tone should still closely review any changes to make sure your intent hasn’t been drastically altered or that confusing word choice hasn’t been introduced to the text. And frankly, double-checking Writing Tools’ handiwork might take longer than just handling the editing yourself.

Still, it’s encouraging to see a tool I didn’t have much use for evolve from one iOS update to the next. Even if I never fully embrace Writing Tools, it’s a positive sign for the rest of Apple Intelligence that Apple realizes there’s still work to be done to make its AI tools even better.

Light-Speed AI: MIT’s Ultrafast Photonic Processor Delivers Extreme Efficiency

Fully Integrated Deep Neural Network Photonic Processor
Researchers demonstrated a fully integrated photonic processor that can perform all key computations of a deep neural network optically on the chip, which could enable faster and more energy-efficient deep learning for computationally demanding applications like lidar or high-speed telecommunications. Credit: Sampson Wilcox, Research Laboratory of Electronics.

A new photonic chip designed by MIT scientists performs all deep neural network computations optically, achieving tasks in under a nanosecond with over 92% accuracy.

This could revolutionize high-demand computing applications, opening the door to high-speed processors that can learn in real-time.

Photonic Machine Learning

Deep neural networks, the driving force behind today’s most advanced machine-learning applications, have become so large and complex that they are pushing the limits of traditional electronic computing hardware.

Photonic hardware, which uses light instead of electricity to perform machine-learning calculations, offers a faster, more energy-efficient solution. However, certain neural network operations have been difficult to achieve with photonic devices, forcing reliance on external electronics that slow down processing and reduce efficiency.

Breakthrough in Photonic Chip Technology

After a decade of research, scientists from MIT and collaborating institutions have developed a breakthrough photonic chip that overcomes these challenges. They demonstrated a fully integrated photonic processor capable of performing all essential deep neural network computations entirely with light, eliminating the need for external processing.

The optical device was able to complete the key computations for a machine-learning classification task in less than half a nanosecond while achieving more than 92 percent accuracy — performance that is on par with traditional hardware.

Photonic Neural Networks and Their Implications

The chip, composed of interconnected modules that form an optical neural network, is fabricated using commercial foundry processes, which could enable the scaling of the technology and its integration into electronics.

In the long run, the photonic processor could lead to faster and more energy-efficient deep learning for computationally demanding applications like lidar, scientific research in astronomy and particle physics, or high-speed telecommunications.

Research Team and Future Prospects

“There are a lot of cases where how well the model performs isn’t the only thing that matters, but also how fast you can get an answer. Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms,” says Saumil Bandyopadhyay ’17, MEng ’18, PhD ’23, a visiting scientist in the Quantum Photonics and AI Group within the Research Laboratory of Electronics (RLE) and a postdoc at NTT Research, Inc., who is the lead author of a paper on the new chip.

Bandyopadhyay is joined on the paper by Alexander Sludds ’18, MEng ’19, PhD ’23; Nicholas Harris PhD ’17; Darius Bunandar PhD ’19; Stefan Krastanov, a former RLE research scientist who is now an assistant professor at the University of Massachusetts at Amherst; Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research; Matthew Streshinsky, a former silicon photonics lead at Nokia who is now co-founder and CEO of Enosemi; Michael Hochberg, president of Periplous, LLC; and Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE, and senior author of the paper. The research was published on December 2 in Nature Photonics.

Machine Learning with Light

Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation in a deep neural network involves the use of linear algebra to perform matrix multiplication, which transforms data as it is passed from layer to layer.

But in addition to these linear operations, deep neural networks perform nonlinear operations that help the model learn more intricate patterns. Nonlinear operations, like activation functions, give deep neural networks the power to solve complex problems.
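To make the two kinds of operation concrete, here is a minimal sketch of one network layer in Python: a matrix multiplication followed by an activation function. These are the computations the new chip carries out optically; the ReLU below is just a common stand-in, not the transfer function the chip actually implements.

```python
import numpy as np

def layer(x, W, b):
    """One deep-network layer: linear matmul, then a nonlinearity."""
    z = W @ x + b              # linear operation: matrix multiplication
    return np.maximum(z, 0.0)  # nonlinear operation: ReLU activation

# Toy forward pass through two stacked layers
x = np.array([0.5, -1.2, 0.3])
W1, b1 = np.random.randn(4, 3), np.zeros(4)
W2, b2 = np.random.randn(2, 4), np.zeros(2)
print(layer(layer(x, W1, b1), W2, b2))
```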

In 2017, Englund’s group, along with researchers in the lab of Marin Soljacic, the Cecil and Ida Green Professor of Physics, demonstrated an optical neural network on a single photonic chip that could perform matrix multiplication with light.

But at the time, the device couldn’t perform nonlinear operations on the chip. Optical data had to be converted into electrical signals and sent to a digital processor to perform nonlinear operations.

“Nonlinearity in optics is quite challenging because photons don’t interact with each other very easily. That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way,” Bandyopadhyay explains.

They overcame that challenge by designing devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip.

The researchers built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.

A Fully-Integrated Network

At the outset, their system encodes the parameters of a deep neural network into light. Then, an array of programmable beamsplitters, which was demonstrated in the 2017 paper, performs matrix multiplication on those inputs.

The data then pass to programmable NOFUs, which implement nonlinear functions by siphoning off a small amount of light to photodiodes that convert optical signals to electric current. This process, which eliminates the need for an external amplifier, consumes very little energy.

“We stay in the optical domain the whole time, until the end when we want to read out the answer. This enables us to achieve ultra-low latency,” Bandyopadhyay says.

Achieving such low latency enabled them to efficiently train a deep neural network on the chip, a process known as in situ training that typically consumes a huge amount of energy in digital hardware.

“This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time,” he says.

The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond.

“This work demonstrates that computing — at its essence, the mapping of inputs to outputs — can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed,” says Englund.

The entire circuit was fabricated using the same infrastructure and foundry processes that produce CMOS computer chips. This could enable the chip to be manufactured at scale, using tried-and-true techniques that introduce very little error into the fabrication process.

Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work, Bandyopadhyay says. In addition, the researchers want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency.

Reference: “Single-chip photonic deep neural network with forward-only training” by Saumil Bandyopadhyay, Alexander Sludds, Stefan Krastanov, Ryan Hamerly, Nicholas Harris, Darius Bunandar, Matthew Streshinsky, Michael Hochberg and Dirk Englund, 2 December 2024, Nature Photonics.
DOI: 10.1038/s41566-024-01567-z

This research was funded, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.

10,000x Faster: AI Discovers New Microscopy Techniques in Record Time

XLuminA Automated Optical Discovery Process
Artistic visualization of XLuminA’s automated optical discovery process. The setup shows laser beams being guided through a network of optical elements including beam splitters, spatial light modulators and mirrors. This represents how XLuminA explores vast experimental configurations to discover novel super-resolution microscopy techniques. The glowing paths highlight the system’s ability to find optimal routes for light manipulation automatically, enabling breakthrough optical designs previously unexplored by human researchers. Credit: Long Huy Dao and Philipp Denghel

XLuminA, an AI framework, enhances super-resolution microscopy by exploring vast optical configurations, rediscovering established techniques, and creating superior experimental designs.

Discovering new super-resolution microscopy techniques often requires years of painstaking work by human researchers. The challenge lies in the vast number of possible optical configurations in a microscope, such as determining the optimal placement of mirrors, lenses, and other components.

To address this, scientists at the Max Planck Institute for the Science of Light (MPL) have developed an artificial intelligence (AI) framework called XLuminA. This system autonomously explores and optimizes experimental designs in microscopy, performing calculations 10,000 times faster than traditional methods. The team’s groundbreaking work was recently published in Nature Communications.

Revolution in Microscopy: The Rise of Super-Resolution Techniques

Optical microscopy is a cornerstone of the biological sciences, enabling researchers to study the smallest structures of cellular life. Advances in super-resolution (SR) methods have pushed beyond the classical diffraction limit of light, approximately 250 nm, allowing scientists to see previously unresolvable cellular details. Traditionally, developing new microscopy techniques has relied on human expertise, intuition, and creativity—a daunting challenge given the vast number of possible optical configurations.

For example, an optical setup with just 10 elements selected from 5 different components, such as mirrors, lenses, or beam splitters, can generate over 100 million unique configurations. The sheer complexity of this design space suggests that many promising techniques may still be undiscovered, making human-driven exploration increasingly difficult. This is where AI-based methods offer a powerful advantage, enabling rapid and unbiased exploration of these possibilities.
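As a rough illustration of that combinatorial explosion (a sketch only; the exact count depends on how placements and per-element settings are distinguished, so the counting convention below is an assumption rather than the paper’s own accounting):

```python
# 10 slots, each independently holding one of 5 component types
n_types, n_slots = 5, 10
print(f"{n_types ** n_slots:,}")        # 9,765,625 configurations

# Even one binary setting per element (say, orientation) inflates this
print(f"{(n_types * 2) ** n_slots:,}")  # 10,000,000,000 configurations
```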

“Experiments are our windows to the Universe, into the large and small scales. Given the sheer enormously large number of possible experimental configurations, it’s questionable whether human researchers have already discovered all exceptional setups. This is precisely where artificial intelligence can help,” explains Mario Krenn, head of the Artificial Scientist Lab at MPL.

AI’s Role in Discovering New Optical Configurations

To address this challenge, scientists from the Artificial Scientist Lab joined forces with Leonhard Möckl, a domain expert in super-resolution microscopy and head of the Physical Glycoscience research group at MPL. Together, they developed XLuminA, an efficient open-source framework designed with the ultimate goal of discovering new optical design principles.

The researchers leverage its capabilities with a particular focus on SR microscopy. XLuminA operates as an AI-driven optics simulator which can explore the entire space of possible optical configurations automatically. What sets XLuminA apart is its efficiency: it leverages advanced computational techniques to evaluate potential designs 10,000 times faster than traditional computational methods.

“XLuminA is the first step towards bringing AI-assisted discovery and super-resolution microscopy together. Super-resolution microscopy has enabled revolutionary insights into fundamental processes in cell biology over the past decades – and with XLuminA, I’m convinced that this story of success will be accelerated, bringing us new designs with unprecedented capabilities,” adds Leonhard Möckl, head of the Physical Glycoscience group at MPL.

Dr. Carla Rodríguez, scientist in the research group of Dr. Mario Krenn at MPL. Credit: Jan Olle

XLuminA: A Breakthrough in Optical Simulation

The first author of the work, Carla Rodríguez, together with the other members of the team, validated their approach by demonstrating that XLuminA could independently rediscover three foundational microscopy techniques. Starting with simple optical configurations, the framework successfully rediscovered a system used for image magnification.

The researchers then tackled more complex challenges, successfully rediscovering the Nobel Prize-winning STED (stimulated emission depletion) microscopy and a method for achieving SR using optical vortices.

Finally, the researchers demonstrated XLuminA’s capability for genuine discovery. The researchers asked the framework to find the best possible SR design given the available optical elements. The framework independently discovered a way to integrate the underlying physical principles from the aforementioned SR techniques (STED microscopy and the optical vortex method) into a single, previously unreported experimental blueprint. The performance of this design exceeds the capabilities of each individual SR technique.

“When I saw the first optical designs that XLuminA had discovered, I knew we had successfully turned an exciting idea into a reality. XLuminA opens the path for exploring completely new territories in microscopy, achieving unprecedented speed in automated optical design. I am incredibly proud of our work, especially when thinking about how XLuminA could help in advancing our understanding of the world. The future of automated scientific discovery in optics is truly exciting!” says Carla Rodríguez, the study’s lead author and main developer of XLuminA.

Expanding the Capabilities of Microscopy Through AI

The modular nature of the framework allows it to be easily adapted for different types of microscopy and imaging techniques. Looking forward, the team aims to include nonlinear interactions, light scattering, and time information which would enable the simulation of systems such as iSCAT (interferometric scattering microscopy), structured illumination, and localization microscopy, among many others. The framework can be used by other research groups and customized to their needs, which would be of great advantage for interdisciplinary research collaborations.

Reference: “Automated discovery of experimental designs in super-resolution microscopy with XLuminA” by Carla Rodríguez, Sören Arlt, Leonhard Möckl and Mario Krenn, 10 December 2024, Nature Communications.
DOI: 10.1038/s41467-024-54696-y

Facebook, WhatsApp and Instagram reported as down by tens of thousands

That’s a whole chunk of online communication gone

Facebook, WhatsApp and Instagram have been reported as down by tens of thousands of people amid widespread service outages.

The website DownDetector has highlighted a sudden spike in reports of the social media and messaging sites going down.

These sites, all owned by Meta, form a cornerstone of social media as they’re where so many people come to talk and share things, and when they go down, even for a short time, it feels tantamount to global chaos.

Meta released a statement confirming that some people were having problems.

In a statement posted on X, they said: “We’re aware that a technical issue is impacting some users’ ability to access our apps.

“We’re working to get things back to normal as quickly as possible and apologise for any inconvenience.”

The Meta sites and the Facebook Messenger service were all flagged on DownDetector as having thousands of reports of outages in the past few minutes, which frankly is a catastrophe for those of us who rely on them for basic communication.

With so many people easily contactable through these sites their loss has left some flapping around in a panic, barely able to articulate their thoughts, and that’s just here at LADbible Towers.

It’s not great, is it? (Justin Sullivan/Getty Images)

In times like this everyone else hops onto other available social media platforms to talk about their usual places going down.

People said that ‘naturally, everyone’s flocking to Twitter to confirm it’s not just them’, and indeed it is not just you alone in suffering through this, random internet user.

The comforting cavalcade of usual memes is being posted right now so the internet can congregate and commiserate that some parts of it just aren’t working very well.

There’s really very little to be done but hang around and keep refreshing your apps until they start working again.

You could crack on with some household chores and do the washing up or put a load of laundry on, but we know you’re going to just stick around on your phone until it’s all working again.

When one of these apps stops working you go to another one to ask if anyone else is having the same problem. (Anna Barclay/Getty Images)

Then suddenly, whoosh, your refreshing appears to work and everything comes back on as though nothing had ever been wrong with it in the first place.

You’ll soon be able to get back on your favourite apps and get back to whatever it was you were doomscrolling before this happened.

It’s funny, you don’t realise how much you rely on these sites until they go down.

It’s like how having a blocked nose when you have a cold makes you really appreciate the joys of breathing through an unobstructed nasal passage that isn’t constantly running with snot.

Or when you develop toothache and forget what it was like to have teeth that didn’t hurt, but the time when it’s all fixed will come again.


OpenWrt Sysupgrade flaw let hackers push malicious firmware images

A flaw in OpenWrt’s Attended Sysupgrade feature used to build custom, on-demand firmware images could have allowed for the distribution of malicious firmware packages.

OpenWrt is a highly customizable, open-source, Linux-based operating system designed for embedded devices, particularly network devices like routers, access points, and other IoT hardware. The project is a popular alternative to a manufacturer’s firmware as it offers numerous advanced features and supports routers from ASUS, Belkin, Buffalo, D-Link, Zyxel, and many more.

The command injection and hash truncation flaw was discovered by Flatt Security researcher ‘RyotaK’ during a routine home lab router upgrade.

The critical (CVSS v4 score: 9.3) flaw, tracked as CVE-2024-54143, was fixed within hours of being disclosed to OpenWrt’s developers. However, users are urged to perform checks to ensure the safety of their installed firmware.

Poisoning OpenWrt images

OpenWrt includes a service called Attended Sysupgrade that allows users to create custom, on-demand firmware builds that include previously installed packages and settings.

“The Attended SysUpgrade (ASU) facility allows an OpenWrt device to update to new firmware while preserving the packages and settings. This dramatically simplifies the upgrade process: just a couple clicks and a short wait lets you retrieve and install a new image built with all your previous packages,” explains an OpenWrt support page.

“ASU eliminates the need to make a list of packages you installed manually, or fuss with opkg just to upgrade your firmware.”

RyotaK discovered that the sysupgrade.openwrt.org service processes these inputs via commands executed in a containerized environment.

A flaw in the input handling mechanism originating from the insecure usage of the ‘make’ command in the server code allows arbitrary command injection via the package names.
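The vulnerable pattern is classic shell injection. The sketch below is a generic Python illustration of the idea, not OpenWrt’s actual server code: an attacker-crafted package name breaks out of a command string handed to a shell, whereas allowlist validation plus shell-free invocation closes the hole.

```python
import re

packages = ["luci", "vim'; echo pwned; '"]  # second name is attacker-controlled

# Vulnerable pattern: splicing untrusted names into a shell command string.
unsafe_cmd = f"make image PACKAGES='{' '.join(packages)}'"
print("shell would see:", unsafe_cmd)  # the crafted name escapes the quoting

# Safer pattern: validate names against a strict allowlist, and pass
# arguments to the build tool as a list so no shell ever parses them.
NAME_RE = re.compile(r"[A-Za-z0-9._+-]+")
valid = [p for p in packages if NAME_RE.fullmatch(p)]
print("names passing validation:", valid)  # the malicious name is rejected
```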

A second problem RyotaK discovered was that the service uses a 12-character truncated SHA-256 hash to cache build artifacts, limiting the hash to only 48 bits.

The researcher explains that this makes brute-forcing collisions feasible, allowing an attacker to create a request that reuses a cache key found in legitimate firmware builds.

By combining the two problems and using the Hashcat tool on an RTX 4090 graphics card, RyotaK demonstrated that it’s possible to modify firmware artifacts to deliver malicious builds to unsuspecting users.
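To put numbers on that, here is a hedged sketch of the scheme as described (the real service’s key derivation may differ): the cache key keeps only the first 12 hex characters of a SHA-256 digest, so finding some other input that matches a known key takes about 2^48 attempts on average, which modern GPU hash rates reduce to hours.

```python
import hashlib

def cache_key(build_request: bytes) -> str:
    # 12 hex characters = 48 bits of the SHA-256 digest, per the article's
    # description of the caching scheme (assumed, not OpenWrt's exact code)
    return hashlib.sha256(build_request).hexdigest()[:12]

print("target cache key:", cache_key(b"legitimate firmware build request"))

# Expected brute-force effort to hit the same truncated hash with a
# different input: about 2**48 attempts. At ~20 GH/s (a rough SHA-256
# figure for an RTX 4090 in Hashcat), that is only hours of computation.
attempts, gpu_rate = 2 ** 48, 20e9
print(f"~{attempts / gpu_rate / 3600:.1f} hours of GPU time")
```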

Python script used for overwriting legitimate firmware builds (Source: Flatt Security)

Check your routers

The OpenWrt team immediately responded to RyotaK’s private report, taking down the sysupgrade.openwrt.org service, applying a fix, and getting it back up in 3 hours on December 4, 2024.

The team says it’s highly unlikely that anyone has exploited CVE-2024-54143, and they have found no evidence that this vulnerability impacted images from downloads.openwrt.org.

However, since they only have visibility into what happened in the last 7 days, users are advised to install a newly generated image to replace any potentially insecure images currently loaded on their devices.

“Available build logs for other custom images were checked and NO MALICIOUS REQUEST FOUND, however due to automatic cleanups no builds older than 7 days could be checked. Affected server is reset and reinizialized from scratch,” explains OpenWrt.

“Although the possibility of compromised images is near 0, it is SUGGESTED to the user to make an INPLACE UPGRADE to the same version to ELIMINATE any possibility of being affected by this. If you run a public, self-hosted instance of ASU, please update it immediately.”

This issue has existed for a while, so there are no cut-off dates, and everyone should take the recommended action out of an abundance of caution.
