
Scientists Can Finally Study Einsteinium 69 Years After Its Discovery

This site may earn affiliate commissions from the links on this page. Terms of use.

The 20th century was notable for numerous reasons, not least of which was that humanity split the atom. In the remnants of atomic explosions, scientists found never-before-seen elements like einsteinium. Now, almost 70 years after its discovery, scientists have collected enough einsteinium to conduct some basic analysis.

Scientists understood that something should exist on the periodic table where einsteinium sits (atomic number 99), but the element had never been identified before 1952, when the United States set off the “Ivy Mike” thermonuclear bomb in the Marshall Islands. However, einsteinium is extremely unstable, and it decayed before we could learn much about it. That’s been the case for the intervening 69 years, until now.

We no longer have to make einsteinium with hydrogen bombs, thank goodness. Scientists have a regular, if meager, source of einsteinium from Oak Ridge National Laboratory’s High Flux Isotope Reactor. This device is used to produce heavy elements like californium (atomic number 98). Scientists make californium because it’s an excellent source of neutrons, but the process also yields some einsteinium. Usually, einsteinium is mixed up with other materials and decays rapidly into berkelium and then into californium. 

Researchers from Lawrence Berkeley National Laboratory managed to isolate a tiny sample of pure einsteinium, a mere 200 nanograms. Previously, 1,000 nanograms was considered the smallest sample suitable for analysis, but the team nonetheless prepared its einsteinium for testing and completed a series of X-ray absorption spectroscopy measurements. The sample showed a blueshift in the emitted light, meaning the wavelength was shortened; the team had expected a redshift, with longer wavelengths. This suggests einsteinium’s bond distances are a bit shorter than predicted based on nearby elements on the periodic table.

A tiny sample of einsteinium. It glows from the intense radiation as it decays.

So that’s potentially fascinating science! But the coronavirus pandemic ruined the experiment as it has so many other things in the past year. The team was unable to complete X-ray diffraction testing that would have told us more about the electron and molecular bond structures of einsteinium before the lab was closed. When the team was again able to access their experiment, too much of the sample had decayed into californium — einsteinium decays at a rate of about 3.3 percent each day. Therefore, the contaminated sample was no longer suitable for testing. 
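That 3.3 percent daily loss implies a half-life of roughly three weeks, which is consistent with the einsteinium-253 isotope. A quick back-of-envelope sketch, using only the decay rate quoted above, shows why a months-long lab closure was fatal to the sample:

```python
import math

DAILY_LOSS = 0.033  # fraction of the sample decaying per day (from the article)

def fraction_remaining(days: float) -> float:
    """Fraction of the original einsteinium left after a given number of days."""
    return (1.0 - DAILY_LOSS) ** days

# Half-life implied by a 3.3 percent daily loss:
half_life_days = math.log(0.5) / math.log(1.0 - DAILY_LOSS)

print(f"Implied half-life: {half_life_days:.1f} days")           # ~20.7 days
print(f"Remaining after 30 days: {fraction_remaining(30):.1%}")  # ~36.5%
```

After a month, barely a third of a 200-nanogram sample is still einsteinium; the rest has become californium contamination.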

The good news is more einsteinium will be available from the reactor every few months. This first step will pave the way for future research on this mysterious element.


ExtremeTech

Lab tech who found B.C.’s 1st case of COVID-19 recalls ‘sheer terror’ of discovery

In the early days of the pandemic, Rebecca Hickman would carefully watch each sample being tested for the novel coronavirus in her lab at the B.C. Centre for Disease Control.

“I was so afraid of getting a positive,” the public health laboratory technologist told CBC this week.

That meant she was paying close attention as the first test came back positive at about 3:30 p.m. on Jan. 27, 2020.

“I actually started to see it get positive within a few seconds,” Hickman recalled. “My first feeling was sheer terror, from a personal point of view.”

The co-designer of B.C.’s test, medical laboratory technologist Tracy Lee, was in a meeting as the results were coming in. She remembers getting a call from Hickman and rushing to the lab to watch the test complete.

Lee felt “both fear and relief” as the test came back positive — fear for what this meant for the people of B.C., but relief that the test was working as planned.

Hickman shared those mixed emotions.

“To design, validate and implement a molecular laboratory test usually takes months if not years, and so to do that in the span of days is a huge achievement,” Hickman said.

There was also some excitement. She said she “felt like I was a part of something huge.”

Hickman spent the rest of that first afternoon sequencing a portion of the genome from the positive sample, and by midnight the lab had confirmed it was SARS-CoV-2, the virus responsible for COVID-19.

B.C. CDC laboratory technologists Tracy Lee and Rebecca Hickman worked together to design the initial COVID-19 test to detect the virus in B.C. It is still being used today. (Michael Donoghue/BCCDC)

It had been a 16-hour workday.

“I went home and slept for five hours, then came back,” she recalls.

The next day, British Columbians watched as Provincial Health Officer Dr. Bonnie Henry confirmed the inevitable. The virus was here in B.C.

“This is the first time in my life I’ve ever found things out before I read it in the news,” Hickman said.

‘Instability and craziness’

A year later, B.C. has confirmed 66,779 cases of the novel coronavirus and 1,189 people have died.

Hickman has gone from anxiously checking the totals after the daily afternoon update from health officials to barely noticing as B.C. records hundreds of cases each day. She says COVID fatigue is real.

There have been difficult times, like in the spring when lab supplies and personal protective equipment began to run out.

“The instability and craziness of it all has been the hardest part,” Hickman said.

Watch: Rebecca Hickman recalls finding B.C.’s first case of COVID-19

Rebecca Hickman was just nine months into her new job at the B.C. Centre for Disease Control when she confirmed B.C.’s first case of the novel coronavirus.

Today, much of her time is spent doing whole genome sequencing for about 15 to 20 per cent of COVID-19 cases.

That work helps health officials track the new, more infectious variants that have popped up in different parts of the world. It’s also used for outbreak response — scientists can determine how the virus is spreading through a community or health-care facility and whether cases are being introduced from new sources.

Hickman was just nine months into her job at the B.C. CDC when she discovered the first case.

She said she’s proud to have played a part in such a major moment in history.

“It has been easily the most difficult year of my life but also the most fulfilling. What we have achieved here over the last year is huge,” Hickman said.


CBC | Health News

IBM Halts Sales of Watson AI For Drug Discovery and Research [Updated]


As the AI hype-cycle has built, we’ve been treated to a plethora of claims about what sorts of improvements and breakthroughs the technology can deliver. One of the most fundamental — and potentially important — has been the idea that we can use AI to find new medicines and treatments for existing conditions where current options have come up short.

That promise has itself now come up short. IBM has announced that it will stop selling its Watson AI system as a tool for drug discovery. It’s a high-profile retreat for the company, which has aggressively marketed AI as being useful for these purposes and which ran into problems last year when reports indicated its systems had made improper, dangerous recommendations for cancer patients (the system’s recommendations were never put into effect).

While IBM cites sluggish sales as a reason for its withdrawal, deeper problems are potentially responsible. A recent deep dive by IEEE Spectrum puts context around these issues. The upshot: After years of work and a number of moonshot projects, IBM has remarkably little to show for its efforts. And the company has created a certain amount of ill will towards itself, IEEE writes, because it took an aggressive, marketing-first approach to AI and Watson, promising grandiose achievements that didn’t accurately portray what the system could actually reliably achieve.

Watson wowed the world with its performance on Jeopardy and an ability to analyze the relationships between words rather than treating them like search terms. In theory, Watson could use its engine to sort through reams of medical data in a similar fashion, finding the hidden signal within a system stuffed with noise. Reality has not cooperated. Of the small amount of research conducted on using AI to improve patient outcomes, none has involved IBM’s Watson.

The IEEE piece takes pains to note that IBM faced huge challenges in attempting to bring its AI program online and use it effectively for human medicine. Nothing like Watson (or what Watson was intended to be) has ever existed before. No one knew how to build it. Yoshua Bengio, a leading AI researcher at the University of Montreal, summarized the efforts to help AI understand medical texts and terminology thusly: “We’re doing incredibly better with NLP than we were five years ago, yet we’re still incredibly worse than humans.”

A Vexing Problem

Watson’s problem wasn’t that it didn’t work; the problem was that it didn’t do the right things. While it quickly learned to ingest and process vast quantities of data, it had a great deal of trouble identifying the bits of information within a study that might lead doctors to actually change their process of care. This was particularly true if the relevant information was incidental to the main point of the research.

Image by IEEE Spectrum

Because patient data wasn’t always properly formatted or even chronologically arranged, the software had trouble understanding patient histories. And the system was incapable of comparing new cancer patients against databases of previous patients to discover hidden treatment patterns, because such practices would not be considered evidence-based. Making a strong recommendation in evidence-based medicine requires double-blind studies, meta-analyses, and systematic evidence reviews, not an AI system claiming to have found a similarity between different types of patients.

It’s not clear what’s next for Watson, if anything. The tool has had some success in narrow, tailored applications with less ambiguity. But despite dozens of planned initiatives, oceans of hype, and a great deal of investment, IBM’s Watson for Drug Discovery has clearly missed its own goals.

Update (4/22/2019): IBM contests multiple aspects of this story. A spokesperson told us that the company is not discontinuing Watson for Drug Discovery, and that it is instead “focusing our resources within Watson Health to double down on the adjacent field of clinical development where we see an even greater market need for our data and AI capabilities.”

It’s not a discontinuation. It’s just a completely different focus in adjacent markets where IBM thinks it can earn more money.

Next, IBM disputes allegations that Watson Health has little to show for its efforts. It offers no particular proof for this claim, beyond noting (completely truthfully) that treating cancer is exceptionally difficult and medical progress is slow. This is a point we entirely agree with.

The question in play is not whether IBM is doing something worthwhile or important by focusing on cancer research, but whether or not its products are producing effective results. The consistent narrative from the companies and organizations that have used these products, thus far, is “No,” or at the very least, “Not at the level of skill and capability the marketing department promised.” IBM’s largest successes, according to IEEE Spectrum, have come from the specific application of AI to narrow, well-understood problems.

Finally, IBM notes that in a follow-up cancer treatment recommendation survey, its accuracy rate rose from 73 percent in 2017 to 93 percent in January 2018. It is not clear whether the gains are attributable to an improvement in Watson’s abilities or to changes in the evaluation process; the second test only focused on non-concordant cases from the first test rather than retesting the entire data set.


New Stone Tools Discovery Rewrites the History of Ancient Humans


Ever since Mary and Louis Leakey’s work at Olduvai Gorge, we’ve known that East Africa served as the cradle of humanity, thanks to the bones and early stone tools found at that site. The tools recovered at Olduvai are referred to as Oldowan, and they represent the earliest widespread stone tools ever manufactured. An even earlier set of tools (Lomekwian) has been discovered in Kenya, dated to 3.3M years ago, but these tools are unrelated to the later Oldowan designs, which either spread across the planet beginning roughly 2.6M years ago or were reinvented independently by different hominin groups.

Up until now, and with the exception of the Lomekwian discovery, the known history of ancient hominins has indicated that the first tool-using cultures arose 2.6M years ago in East Africa. From 2.6M to 1.7M years ago, Oldowan technology spread across the globe, with Homo erectus inheriting Oldowan technology and adapting it into the more advanced Acheulean type of stone tool manufacturing beginning about 1.7M years ago. The evidence has seemed quite clear — tool use arises in East Africa and spreads out from there, moving along with hominin groups themselves. But a new discovery in Algeria, at Ain Boucherit, could upend our understanding of these timelines. New digs at that location have uncovered stone tools dating to between 1.9M and 2.4M years ago.


Sites of major Oldowan tool finds. Image by Wikipedia

This suggests a pattern of hominin dispersal that’s considerably different than our previous models. This isn’t the first time old tools have been found outside Africa — there are stone tools in Georgia (the country, not the state) dated to 1.8M years ago, and sites in Pakistan and China dating to 1.8M and 1.66M years ago, respectively. But the question of when our ancestors spread to these locations is also tied to which of our ancestors spread to these locations, and that’s where things start to get rather interesting. Homo erectus, which arose 1.8M years ago, is known for having settled Africa, the Middle East, India, Pakistan, China, and the Indonesian islands. H. erectus is believed to have been the first hominin to have spread so widely out of Africa — earlier species of hominins are generally confined to various sections of Africa, radiating outwards from East Africa.

If hominins were using tools in North Africa 2.4M years ago, it means one of two things. Either the species of hominins living in East Africa at the time made a 2,500-mile trip over difficult terrain with no particularly well-understood reason to do so, or there were unidentified groups of hominins living in North Africa at the same time who independently discovered Oldowan technology and began using tools. The earliest known hominin is 7 million years old and lived in Chad, illustrating that while East Africa may, in fact, have been the so-called cradle of humanity, various hominin species were spreading out across the continent millions of years before.

This potentially complicates the question of tool use. While tool use was once thought to be one of the most important differentiating factors between the genus Homo and Australopithecus, we’ve since discovered that some australopithecine species like Australopithecus garhi were likely tool users. Tool use has been observed in chimpanzees as well. With no bones discovered at the Algerian site, we don’t yet know which of our ancestors, if any, had a hand in creating these ancient relics. What these finds show in aggregate is that our understanding of how hominins spread across Africa and the wider world remains incomplete.

Feature image courtesy of Wikipedia

Now Read: Researchers Claim Neanderthals Could Start Fires Using Stone Tools, 7.2-Million-Year-Old Jawbone Indicates Oldest Hominin Lived in Europe, Not Africa, and Dinosaur-Killing Impact May Have Superheated Earth’s Atmosphere for 100,000 Years


New Discovery Strengthens the Case for Elusive Planet 9


Years after its proposed detection, Planet 9 remains obstinately absent from our astronomical charts. Evidence for its existence is found only indirectly, in the orbits of a number of trans-Neptunian objects, or TNOs, which follow trajectories around the sun that collectively imply the presence of another, larger planet in the distant boundaries of the solar system.

The biggest problem with Planet 9 is that we haven’t found it. The biggest problem for those who want to argue it doesn’t exist is that we keep finding evidence it does. The most recent piece — rock? — of evidence is 2015 TG387, colloquially known as “The Goblin.” What makes the Goblin so interesting is what it doesn’t do: namely, interact with other planets in the solar system. It never comes close enough to Jupiter, Saturn, Uranus, or Neptune to be gravitationally influenced by them. Yet its orbit around the solar system shows that it’s clearly being influenced by something.

Planetary orbits

All of the giant planets are located in the far right-hand side of the image. That’s how far away from us 2015 TG387 is.

“It never interacts with anything that we know of in the solar system,” says Scott Sheppard, an astronomer at the Carnegie Institution for Science and a co-discoverer of 2015 TG387. “Somehow, it had to get on this elongated orbit in the past, and that’s the big question: What did it interact with to get [there]?”

Mathematical simulations show only one real possibility — 2015 TG387 was shifted into its highly elongated orbit by interactions with a larger body, one that fits the assumed characteristics of our hypothetical Planet 9. Part of the problem with finding the actual planet is that bodies this far from the Sun are exceedingly faint. The Goblin spends most of its time too far from Earth to be detected by telescopes and can only be seen when it’s on its closest approach to the Sun, something that happens only every 40,000 years. In other words, the only reason we found it is that it’s in the right place in its orbit to be found (at roughly 300 km in diameter, the Goblin is substantially smaller than Ceres).
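For a sense of scale, Kepler's third law converts that 40,000-year period into an average orbital distance. This is a rough sketch: it assumes a body orbiting the Sun, using the simple form of the law with distance in astronomical units and period in years.

```python
# Kepler's third law for bodies orbiting the Sun: a^3 = T^2,
# with the semi-major axis a in AU and the orbital period T in years.
ORBITAL_PERIOD_YEARS = 40_000  # 2015 TG387's approximate period, per the article

semi_major_axis_au = ORBITAL_PERIOD_YEARS ** (2 / 3)

print(f"Average orbital distance: ~{semi_major_axis_au:.0f} AU")  # roughly 1170 AU
# For comparison, Neptune orbits at roughly 30 AU.
```

An object averaging over a thousand AU from the Sun, and catchable only near its closest approach, helps explain why such discoveries are so rare.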

There are critics of the Planet 9 theory, including those who believe that the collective gravity of these small objects may have nudged them into strange elliptical orbits, or that the entire issue is a sampling artifact from only examining a small portion of the sky. If we see these unusual elliptical orbits all over the solar system, it would mean they’re being caused by something else (nobody expects space to be full of invisible planets whizzing about).

It’s early in all cases. Planet 9 could simply be too far away from Earth to be observed at the moment, thanks to a combination of dim surface albedo, distance, and us not knowing where to look. Previous sky surveys have ruled out a lot of candidates; we don’t think, for instance, that there’s any way a Jupiter-size planet could still be hiding anywhere nearby. But there are still gaps in our knowledge that could be cloaking a distant ice ball.

Now Read: We May Not Need ‘Planet 9’ to Explain Unusual Orbits in Outer Solar System, Almost Two Years Later, We Still Don’t Know if Planet Nine Exists, and Theoretical Planet 9 may be a rogue planet not native to our solar system


NASA Makes New Discovery With 22-Year-Old Galileo Data From Ganymede


Jupiter has a whopping 69 moons, and you don’t hear a lot about the largest of them. It may not have the icy sheets of Europa or the geysers of Enceladus, but Ganymede is remarkable in its own right. In fact, it may be even more interesting than we thought. NASA researchers have gone over telemetry from a 1996 Ganymede flyby in order to analyze some data that sat ignored all these years.

Ganymede is the largest moon of Jupiter, but if it were orbiting the sun, we’d undoubtedly label it as a planet (no Pluto uncertainty here). It’s the ninth largest object in the solar system if you count the sun — even bigger than the planet Mercury. Astronomers also think Ganymede has a subsurface ocean that could contain more water than all of Earth’s oceans. An object this large in orbit of Jupiter is bound to have some unusual properties, and now we’re finding out just how unusual.

On June 26, 1996, the Galileo spacecraft made its first of six flybys of Ganymede. This probe checked out several moons, as well as Jupiter itself. It confirmed that Ganymede has a magnetic field, which was a surprise to the scientific community 22 years ago. No one expected that a moon in orbit of a planet like Jupiter could have its own magnetosphere. On Earth, our magnetic field protects the surface from harsh solar radiation, and it’s unlikely life would exist on Earth without it. On Ganymede, the magnetic field is likely a result of the moon’s shifting liquid iron core.

Galileo carried an instrument called the Plasma Subsystem (PLS), which it used to measure the density, flow, and temperature of plasma in the Jovian system. Somehow, no one took a close look at the PLS data from Ganymede all these years, but now it’s published thanks to a study led by NASA’s Glyn Collinson. The nature of plasma (charged particles) around Ganymede can actually tell us a great deal about its magnetic field.

Whereas the solar wind controls the shape and intensity of Earth’s magnetosphere, Ganymede is tucked away inside Jupiter’s magnetic field. Thus, the flow of plasma around Jupiter bends Ganymede’s field into an unusual shape. The study describes Ganymede’s magnetosphere as a flattened horn shape pointing in the direction of its orbit. The waves of plasma could also explain unusually bright auroras seen in Ganymede’s polar regions. These particles rain down at the poles, causing charged water molecules to shoot back up.

Scientists also suspect that Ganymede’s subsurface ocean could play a role in the generation of its magnetic field. That force may also interact with Jupiter’s magnetosphere in unknown ways. There’s still more analysis to do with the PLS data, but we should have much more information on Ganymede in a few years. The ESA plans to launch a mission called JUICE to explore Jupiter’s moons in 2022.


Discovery of Gravitational Waves Wins 3 US Scientists Nobel Prize


In 1916, the famed theoretical physicist Albert Einstein postulated that certain events in the universe would produce gravitational waves. The detection of such waves would be further confirmation of general relativity, but Einstein suspected the waves would be too faint to be detected on Earth. Now, 100 years later, three US scientists are sharing the Nobel Prize in Physics for detecting gravitational waves.

The Nobel Committee has awarded this year’s prize in physics to Rainer Weiss, Kip Thorne, and Barry Barish. Weiss gets half of the nine million Swedish kronor ($1.1 million) prize, and Thorne and Barish share the other half. The first detections came in 2015 (published in early 2016), but the Nobel Committee always waits at least a year to make an award. If it had made this award last year, Scottish physicist Ron Drever would most likely have shared it with Thorne and Weiss, with whom he worked before his death in March of this year.

All three winners have been involved with the Laser Interferometer Gravitational-Wave Observatory (LIGO) project in some way. LIGO is composed of two facilities: one in Washington state and the other in Louisiana. A European station was added just this year in Italy. Weiss was awarded half of the prize for developing the strategy used at LIGO to make the gravitational wave detection. Meanwhile, Thorne did the theoretical work that pointed LIGO in the right direction. Barish was the second director of LIGO, beginning in 1994. He’s credited with spearheading the effort at LIGO that made detection possible.

Gravitational waves were the last major prediction from general relativity that remained unconfirmed before LIGO researchers made their announcement. According to relativity, movements of mass should cause ripples in the spacetime continuum. These “waves” would propagate outward at the speed of light, but they would be extremely faint. Thus, we could only hope to detect the largest events, like the collision of two black holes.


LIGO team’s visualization of gravitational waves caused by two rapidly orbiting black holes in a binary system.

LIGO uses a technique called laser interferometry to detect gravitational waves. It bounces lasers off reflectors at the end of a 4-kilometer tube, then monitors the laser’s return for evidence of movement caused by gravitational waves. If there’s no alteration in the mirrors, the light returns unchanged and the beams cancel each other out. If a gravitational wave perturbs the system, the waves won’t cancel out. LIGO can detect movements as small as a ten-thousandth of the charge diameter of a proton. It successfully detected waves emanating from a pair of black holes orbiting each other as they prepared to merge. The paper was published in 2016 along with an audio recording of the wave.
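That sensitivity figure can be translated into the dimensionless "strain" that gravitational-wave papers quote. This is an illustrative back-of-envelope sketch; the proton charge diameter used here (about twice the measured charge radius) is an assumption made for scale.

```python
# How small "a ten-thousandth of a proton's charge diameter" actually is,
# and the equivalent strain h = dL / L over a LIGO arm.
PROTON_CHARGE_DIAMETER_M = 1.7e-15  # assumed: ~2x the proton charge radius
ARM_LENGTH_M = 4_000.0              # each LIGO arm is 4 km long

delta_l = PROTON_CHARGE_DIAMETER_M / 10_000  # detectable length change, meters
strain = delta_l / ARM_LENGTH_M              # dimensionless strain

print(f"Length change: {delta_l:.1e} m")   # 1.7e-19 m
print(f"Equivalent strain: {strain:.1e}")  # roughly 4e-23
```

For comparison, the strain of the first detected event peaked around 1e-21, comfortably within that sensitivity.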

This discovery has been hailed as a monumental achievement for science, one now recognized by the Nobel Committee. The work of these scientists not only confirms a 100-year-old theory; it opens up new avenues of study today and into the future.
