Nanodiamond-enhanced MRI offers greater range of diagnostic, therapeutic applications

“With this study, we showed we could produce biomedically relevant MR images using nanodiamonds as the source of contrast in the images and that we could switch the contrast on and off at will,” says David Waddington, lead author of the paper and a PhD student at the University of Sydney in Australia. Waddington is currently working with Matthew Rosen, PhD, in the Low-Field Imaging Laboratory at the Martinos Center. “With competing strategies, the nanodiamonds must be prepared externally and then injected into the body, where they can only be imaged for a few hours at most. However, as our technique is biocompatible, we can continue imaging for indefinite periods of time. This raises the possibility of tracking the delivery of nanodiamond-drug compounds for a variety of diseases and providing vital information on the efficacy of different treatment options.”

Waddington began this work three years ago as part of a Fulbright Scholarship awarded early in his graduate work at the University of Sydney, where he is a member of a team led by study co-author David Reilly, PhD, in the new Sydney Nanoscience Hub — the headquarters of the Australian Institute for Nanoscale Science and Technology, which launched last year. As part of the Reilly group, Waddington played a crucial role in early successes with nanodiamond imaging, including a 2015 paper in Nature Communications. He then sought to extend the potential of the approach by collaborating with Rosen at the Martinos Center and Ronald Walsworth, PhD, at Harvard University, also a co-author of the current study. Rosen’s group is a world leader in the area of ultra-low-field magnetic resonance imaging, a technique that proved essential to the development of in vivo nanodiamond imaging.

Innovative approaches to improve personalized radiation therapy for head and neck cancer patients

“Just as every person is different, every person’s cancer might be very different and therefore require a very different course of radiotherapy. Yet, clinicians have been unable to make the distinctions required to biologically personalize radiotherapy treatment,” explained Louis B. Harrison, MD, chair of the Radiation Oncology Department at Moffitt.

Moffitt researchers have developed a test called the radiosensitivity index that determines how sensitive a patient’s tumor is to radiation therapy. The radiosensitivity index is based on the expression of different genes in a patient’s tumor and has been validated in nine different patient groups across different tumor types. The researchers are able to use the radiosensitivity index within a mathematical framework to select the optimum radiotherapy dose for each patient based on their individual tumor biology. The researchers are in the process of developing the first clinical trial to test their genomic-adjusted radiation dose in patients with squamous cell carcinoma of the oropharynx.
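
The dose-selection step can be illustrated with a short sketch. This is not Moffitt’s actual implementation; it follows one published linear-quadratic formulation of genomic-adjusted radiation dose (GARD), in which a patient-specific radiosensitivity parameter is derived from the RSI, and the constants and example values below are purely illustrative.

```python
import math

# Illustrative constants from the published linear-quadratic GARD model;
# a clinical implementation would use validated, cohort-specific values.
BETA = 0.05   # Gy^-2, fixed in the model
D_REF = 2.0   # Gy, reference fraction size used to derive alpha from RSI

def alpha_from_rsi(rsi: float) -> float:
    """Derive the patient-specific alpha from the radiosensitivity index.

    RSI is modeled as the surviving fraction after one 2 Gy fraction:
        RSI = exp(-(alpha * d + beta * d^2)),  d = 2 Gy
    so  alpha = -ln(RSI) / d - beta * d.
    """
    return -math.log(rsi) / D_REF - BETA * D_REF

def gard(rsi: float, dose_per_fraction: float, n_fractions: int) -> float:
    """GARD = n * d * (alpha + beta * d): biological effect of a full course."""
    alpha = alpha_from_rsi(rsi)
    d = dose_per_fraction
    return n_fractions * d * (alpha + BETA * d)

# A more radiosensitive tumor (lower RSI) accrues a higher GARD from the
# same physical dose, here 70 Gy delivered in 35 fractions of 2 Gy:
print(gard(rsi=0.2, dose_per_fraction=2.0, n_fractions=35))
print(gard(rsi=0.5, dose_per_fraction=2.0, n_fractions=35))
```

Inverting this relationship, i.e. choosing the number of fractions or fraction size that reaches a target GARD for a given patient’s RSI, is the kind of per-patient dose adjustment the mathematical framework enables.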

“We propose to explore how we can move away from a one-size-fits-all approach to radiotherapy treatment for patients with head and neck cancer, and to develop evidence with which to guide personalization and biological adaptation of radiotherapy to improve outcomes and reduce toxicity,” said Jimmy J. Caudell, MD, PhD, associate member and section head of Head and Neck Radiation Oncology at Moffitt.

Researchers are also making advances in the field of radiomics, the use of MRI, CT and PET scans to characterize tumors through imaging features not fully apparent to the naked eye. Radiomics models might inform physicians not just about anatomy, but also about cellular and genetic features that may affect treatment and prognosis. “Biopsies are limited by the fact that they are acquired at a single timepoint and from a single anatomical location. Radiomics might be able to provide enough information for a virtual 3D biopsy where the entire tumor can be sampled non-invasively and repeatedly,” said Harrison.
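
As a rough illustration of the kind of quantitative features radiomics extracts, here is a minimal sketch computing a few first-order features over a tumor region of interest. The feature set and voxel values are invented for illustration; real pipelines compute hundreds of standardized shape, intensity, and texture features from the imaging volumes.

```python
import math

def first_order_features(voxels, bins=8):
    """Compute simple first-order radiomic features from a flat list of
    region-of-interest voxel intensities: mean, variance, and a
    histogram-based intensity entropy (in bits)."""
    n = len(voxels)
    mean = sum(voxels) / n
    var = sum((v - mean) ** 2 for v in voxels) / n
    lo, hi = min(voxels), max(voxels)
    width = (hi - lo) / bins or 1.0  # avoid zero width for flat regions
    counts = [0] * bins
    for v in voxels:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": var, "entropy": entropy}

# Invented ROI intensities standing in for voxels segmented from a scan:
roi = [34, 36, 35, 52, 51, 50, 33, 35, 49, 53, 36, 34]
print(first_order_features(roi))
```

Because the scan, unlike a biopsy, covers the whole tumor and can be repeated, such features can be tracked over time and across the entire lesion.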

The researchers also believe that integrated mathematical oncology offers a unique approach to develop more personalized radiation therapy. Integrated mathematical oncology uses experimental and clinical data to build models to predict a patient’s response to radiation treatment. This allows scientists to conduct experiments that would be impossible with common laboratory techniques.

The researchers hope that the advances being made in precision medicine at Moffitt will continue to innovate the field and bring clinicians closer to a personalized treatment approach with radiation therapy, while simultaneously improving patient outcomes and reducing treatment complications.

New material could save time and money in medical imaging and environmental remediation

“A company with an abandoned chemical plant that has barrels of unlabeled solvents, or a public utility concerned that its water supply has been contaminated, today faces a cumbersome process of identifying the chemicals before clean-up can start,” said Simon Humphrey, associate professor of chemistry who led the research. “It’s costly and can take two or three days. We can now do that with a rapid, on-site method, and that difference could protect people’s health and reduce pollution far more efficiently.”

Humphrey envisions disposable paper dipsticks coated with the new material. A user would dip one into an uncharacterized substance and stick it into an ultraviolet (UV) reader. Based on the colors of light emitted, the device would indicate what components, such as organic solvents, fluoride, mercury and heavy metals, are in the substance.

The material, called PCM-22 and described in a paper published today in the journal Chem, is a crystal made of lanthanide ions and triphenylphosphine. When a chemical bonds to the material and a UV light shines on it, the material emits specific colors of visible light. Each chemical produces a unique eight-factor signature of color and brightness that can be used to identify and quantify it in an uncharacterized sample.

Once scientists calibrate the sensor on known samples to create a catalog of fingerprints that can be used to identify the components of uncharacterized samples, the dipstick-type sensors would be relatively simple to produce, Humphrey said. He and UT Austin share joint patents on the sensor material and on the process of analyzing results, and UT Austin’s Office of Technology Commercialization has already begun work to license the technology to companies.
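
To illustrate how such a calibrated catalog of fingerprints might be used, here is a minimal, hypothetical sketch: each analyte is represented by an invented eight-component signature (intensities in eight color/brightness channels under UV excitation), and an unknown reading is matched to its nearest catalog entry. None of these names or values comes from the paper; a real catalog would be built from calibration runs on known samples.

```python
import math

# Hypothetical calibration catalog mapping known analytes to invented
# 8-factor luminescence signatures (values are for illustration only).
CATALOG = {
    "methanol": (0.91, 0.12, 0.05, 0.33, 0.48, 0.07, 0.22, 0.60),
    "fluoride": (0.15, 0.80, 0.44, 0.09, 0.31, 0.72, 0.11, 0.05),
    "mercury":  (0.05, 0.10, 0.95, 0.40, 0.02, 0.18, 0.66, 0.30),
}

def identify(signature):
    """Return the catalog entry whose fingerprint is closest
    (Euclidean distance) to a measured 8-factor signature."""
    def dist(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(signature, ref)))
    return min(CATALOG, key=lambda name: dist(CATALOG[name]))

# A noisy reading near the methanol fingerprint matches correctly:
reading = (0.89, 0.14, 0.06, 0.30, 0.50, 0.08, 0.20, 0.58)
print(identify(reading))  # -> methanol
```

A deployed UV reader would also report concentration, since the source notes the signature can be used to quantify as well as identify a component; a simple extension would interpolate brightness against calibration curves.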

Two-photon imaging of Meissner’s corpuscle mechanoreceptors in living tissue

Researchers at the Nagoya Institute of Technology (NITech) and Nagoya University (NU) have recently developed an in vivo imaging method to observe Meissner’s corpuscles (MCs) in living skin. Not only could this imaging method unlock the mechanism of mechanoreceptor function, but it could also serve as a novel diagnostic tool for neural diseases and accelerate the study of aging-related neurodegeneration.

Previous studies, which involved observing MCs in cut sections of fixed tissues, have described the morphology and physiological functions of MCs, but the mechanism of mechanical transduction by MCs in living tissue remains unknown. The NITech-NU research team has now opened a window to understanding the mechanism of mechanoreceptor function by using two-photon microscopy to observe MCs in action, in situ in the fingertips of live mice.

Two-photon microscopy is a fluorescence-based technique that allows the imaging of living tissue, up to a depth of one millimeter, with high resolution and low phototoxicity. Using two-photon imaging, the NITech-NU scientists are the first to observe mechanoreceptors in vivo in non-transgenic tissue. “To visualize MCs, we used a nontoxic and long-lived fluorescent lipophilic dye that allows for extended time-lapse observation in the same individual,” says Pham Quang Trung, a PhD student at NITech and lead author of the study. “The fluorescent dye persisted in injected mice for at least 5 weeks, and we successfully imaged the same MCs in a mouse paw three times over five days.”

Scientists utilize innovative neuroimaging approach to unravel complex brain networks

The team has successfully combined two pioneering technologies, optogenetics and functional magnetic resonance imaging (fMRI), to investigate the dynamics underlying brain activity propagation. Their breakthrough, simultaneously capturing large-scale, brain-wide neural activity propagation and interaction dynamics while examining their functional roles, takes scientists a step further in unravelling the mysteries of the brain. It could lead to the development of new neurotechnologies for early diagnosis of and intervention in brain diseases, including autism, Alzheimer’s disease and other dementias.

The findings have recently been published in the international academic journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

The human brain is the source of our thoughts, emotions, perceptions, actions, and memories. How the brain actually works, however, remains largely unknown. One grand challenge for 21st-century neuroscience is to achieve an integrated understanding of large-scale, brain-wide interactions, particularly the patterns of neural activity that give rise to function and behavior.

In 2013, the Obama administration in the US launched the BRAIN Initiative to “accelerate the development and application of new technologies that will enable researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact at the speed of thought.” In November 2016, China launched its own initiative, the “China Brain Project,” which aims to advance basic research on the neural circuit mechanisms underlying cognition, in hopes of improving brain disease diagnosis and intervention and inspiring the development of brain-machine intelligence technology.

Wood filter removes toxic dye from water

The team started with a block of linden wood, which they soaked in palladium, a metal used in cars’ catalytic converters to remove pollutants from the exhaust. In this new filter, the palladium bonds to particles of dye. The wood’s natural channels, which once moved water and nutrients between the leaves and roots, now allow water to flow past the palladium nanoparticles for efficient removal of the toxic dye particles. Water tinted with methylene blue slowly drips through the wood and comes out clear.

“This could be used in areas where wastewater contains toxic dye particles,” said Amy Gong, a materials science graduate student and co-first author of the research paper.

The purpose of the study was to analyze wood via an engineering lens. The researchers did not compare the filter to other types of filters; rather, they wanted to prove that wood can be used to remove impurities.

“We are currently working on using a wood filter to remove heavy metals, such as lead and copper, from water,” said Liangbing Hu, the lead researcher on the project. “We are also interested in scaling up the technology for real industry applications.” Hu is a professor of materials science and a member of the University of Maryland’s Energy Research Center.

“We found that the wood’s channels are actually slightly bent, and they are connected by pores, which slightly increase the time that the water is in contact with the wood,” said Siddhartha Das, professor of mechanical engineering. His team helped Hu’s group study the flow of water through the wood.

The research, which was published March 31, 2017, in the journal ACS Nano, is the latest innovative use of wood by the UMD team. They previously made a battery and a supercapacitor out of wood, a battery from a leaf, and transparent wood for use in windows.

VIDEO: Wood filter removes toxic dye from water https://youtu.be/o8H8YxRP1Dw

Let there be light: Controlled creation of quantum emitter arrays

Quantum light emitters, or quantum dots, are of interest for many different applications, including quantum communication and networks. Until now, it has been very difficult to produce large arrays of quantum emitters close together while keeping the high quality of the quantum light sources. “It’s almost a Goldilocks problem — it seems like one either obtains good single photon sources, or good arrays but not both at the same time. Now, all of a sudden, we can have hundreds of these emitters in one sample,” said Mete Atatüre, a professor at the Cavendish Laboratory of the University of Cambridge.

The random occurrence of quantum dots in transition metal dichalcogenide (TMD) materials made systematic investigation difficult. “The ability to deterministically create our sources has made a dramatic change in the way we do our day-to-day research. Previously it was pure luck, and we had to keep our spirits high even if we didn’t succeed. Now, we can do research in a more systematic way,” said Atatüre. Not only does this new method make performing research more straightforward, but it also leads to improvements in the emitters themselves: “The quality of the emitters that we create on purpose seems to be better than the natural quantum dots.”

Dhiren Kara, a researcher at the Cavendish Laboratory, said: “There is lots of mystery surrounding these emitters, in how they originate and how they work. Now, one can directly create the emitters and not have to worry about waiting for them to appear randomly. In that sense, it speeds up a lot of the science.”

To create the quantum light sources, the researchers cut an array of nanoscale pillars into silica or nanodiamond, and then suspended the few-atom-thick TMD layer on top of the pillars. The quantum emitters are then created in the TMD where it is supported by the pillars, so it is possible to choose exactly where the single photons should be generated. “The fact that the emitters are generated in a mechanical way is good, because it means that they are quite robust, and material independent,” said Carmen Palacios-Berraquero, a researcher at the Cavendish Laboratory and first author of the work.

The deterministic and robust generation of quantum sources means new opportunities for hybrid structures of photonic and electronic functions layered together. The quantum arrays are fully scalable and compatible with silicon chip fabrication.

Andrea Ferrari, Science and Technology Officer and Chair of the Management Panel of the Graphene Flagship, was also involved in the research. He added: “Quantum technologies are recognized as key investment areas for Europe, with a new Quantum Flagship recently announced. It is great to see that layered materials now have a firm place among the promising approaches for the generation and manipulation of quantum light, and could be enablers of a future integrated technology.”

Unexpected damage found rippling through promising exotic nanomaterials

Now, scientists have developed a new method to probe three-dimensional, atomic-scale intricacies and chemical compositions with unprecedented precision. The breakthrough technique — described February 6 in the journal Nano Letters — combines atomic-force microscopy with near-field spectroscopy to expose the surprising damage wreaked by even the most subtle forces.

“This is like granting sight to the blind,” said lead author Adrian Gozar of Yale University. “We can finally see the all-important variations that dictate functionality at this scale and better explore both cutting-edge electronics and fundamental questions that have persisted for decades.”

Scientists from Yale University, Harvard University, and the U.S. Department of Energy’s Brookhaven National Laboratory developed the technique to determine why a particular device fabrication technique — helium-ion beam lithography — failed to create the scalable, high-performing superconducting nanowires predicted by both theory and simulation.

In previous work, helium-ion beams were used to carve 10-nm-wide channels, some 10,000 times thinner than a human hair, through custom-made materials. However, the new study revealed beam-induced damage rippling out over 50 times that distance. At this scale, that difference was both imperceptible to existing probes and functionally catastrophic.

“This directly addresses the challenge of quantum computing, for example, where companies including IBM and Google are exploring superconducting nanowires but need reliable synthesis and characterization,” said study coauthor and Brookhaven Lab physicist Ivan Bozovic.

Toward mass-producible quantum computers

But practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum “superposition,” or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.