One of the most important foundations of the modern life sciences is the ability to cultivate cells outside the body and to observe them with optical microscopes. In this way, cellular processes can be analysed in much more quantitative detail than in the body. At the same time, however, a problem arises. "Anyone who has ever observed biological cells under a microscope knows how unpredictable their behaviour can be. On a traditional culture dish they lack 'orientation', unlike in their natural environment in the body. That is why, for certain research questions, it is difficult to derive any regularities from their shape and movement," explains Prof. Schwarz. In order to learn more about the natural behaviour of cells, the researchers therefore resort to methods from materials science: the substrate for microscopic study is structured in such a way that it normalises cell behaviour. The Heidelberg physicists explain that certain printing techniques can deposit proteins on the substrate in geometrically well-defined areas. Cell behaviour can then be observed and evaluated with the usual microscopy techniques.
The group of Ulrich Schwarz aims to describe in mathematical terms the behaviour of biological cells on micropatterned substrates. Such models should make it possible to quantitatively predict cell behaviour for a wide range of experimental setups. For that purpose, Philipp Albert has developed a sophisticated computer program that captures the essential properties of individual cells and their interactions. It can also predict how large collections of cells behave on a given geometric structure. He explains: "Surprising new patterns often emerge from the interplay of several cells, such as streams, swirls and bridges. As in physical systems, e.g. fluids, the whole here is more than the sum of its parts. Our software package can calculate such behaviour very rapidly." Dr Albert's computer simulations show, for example, how ensembles of skin cells can overcome gaps of up to about 200 micrometres in a wound model.
Another promising application of these advances is being investigated by Dr. Holger Erfle and his research group at the BioQuant Centre: high-throughput screening of cells. Robot-controlled equipment is used to carry out automated pharmacological or genetic tests with many different active substances, designed, for example, to identify new medications against viruses or for cancer treatment. The new software now enables the scientists to predict which geometries are best suited to a certain cell type, and to assess the significance of changes in cell behaviour observed under the microscope.
The research projects by Prof. Schwarz, Dr. Albert and Dr. Erfle received European Union funding from 2011 to 2015 via the programme "Micropattern-Enhanced High Throughput RNA Interference for Cell Screening" (MEHTRICS). Besides the BioQuant Centre, this consortium included research groups from Dresden, France, Switzerland and Lithuania. The total support for the projects amounted to EUR 4.4 million.
If you sign up for a course today, look out for a nice surprise. For a limited but unspecified time, Atlassian is offering to subsidize. The offer extends to all of Computer Science, and today's the day there's 50% off at the checkout.
That can just about be sung to the tune of the Teddy Bears' Picnic. It's just too good an offer not to make a song and dance about, and if you don't know about it, that's because it is really well hidden - you have to get to the final stage of the checkout before you see the "Use Offer" button.
Why is Atlassian being so generous? According to the promo sent to Coursera students:
Atlassian, creator of leading software products JIRA, Bitbucket, Confluence, and Hipchat, is collaborating with Coursera to grow the community of software developers by helping motivated learners get the technical skills they need.
However, Coursera has clarified that:
All courses in our computer science category except for a handful (~10) of our geospatial and bioinformatics courses are eligible. The Atlassian scholarship can be applied to complete Specializations, but it replaces the pre-pay discount.
So this gives around 220 courses included in the offer. If, for example, you use the Pre-Pay option covering all of the 5-course Scala Specialization, you'll pay around $200 rather than $400. For Data Structures and Algorithms, which has six courses, you'll save even more - but you won't get the normal Pre-Pay discount as well.
When providing an overview of any Coursera MOOC I always point out that it is possible to audit it for free, but auditing misses out not only the certificate but also "graded items" - so not only can't you prove to other people that you've done the course, you can't even have your knowledge tested. This 50% off should therefore be a big incentive. It is, however, well hidden, and you don't even get to hear about it when you make the choice between Purchase or Audit.
It is only once you've decided on the paying option and continued to the checkout that you see the offer and have the option of using it.
So hurry down to the woods - no, to Coursera - while the offer is still available.
To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook, Google+ or LinkedIn.
It's been nearly half a year since the debut of the Honor 5X and it finally looks as though the handset will receive an update to Android Marshmallow. The update will offer the standard enhancements that come with Marshmallow - like Doze - and will also bump EMUI to version 4.0.
While it might look like the Honor 5X is late to the party, as of June, only 10% of Android handsets have made the jump to Marshmallow. The latest version of Android made its debut in October of 2015, arriving on Nexus devices first, and slowly trickling its way to other handsets over the past eight months. Although the number has been slowly rising, Google has already announced the next version, Android N - which Nexus device owners are already currently previewing - which will be making its debut sometime in the fall.
The update for the Honor 5X should arrive over the air (OTA), or you can check Honor's proprietary update application. Impatient users can download the update and install it manually, but be cautious: manual installations can erase data or even damage your device.
Source: Honor via MobiPicker
In future, many everyday items will be connected to the Internet and, consequently, become targets of attackers. As all devices run different types of software, supplying protection mechanisms that work for all poses a significant challenge.
This is the objective pursued by the Bochum-based project "Leveraging Binary Analysis to Secure the Internet of Things," short Bastion, funded by the European Research Council.
A shared language for all processors
Because the software running on a device more often than not remains the manufacturer's trade secret, researchers at the Chair for System Security at Ruhr-Universität Bochum do not analyse the original source code, but rather the binary code of zeros and ones that they can read directly from a device.
However, different devices are equipped with processors with different complexities: while an Intel processor in a computer understands more than 500 commands, a microcontroller in an electronic key is able to process merely 20 commands. An additional problem is that one and the same instruction, for example "add two numbers," is represented as different sequences of zeros and ones in the binary language of two processor types. This renders an automated analysis of many different devices difficult.
In order to perform processor-independent security analyses, Thorsten Holz's team translates the different binary languages into a so-called intermediate language. The researchers have already successfully implemented this approach for three processor architectures: Intel, ARM and MIPS.
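The idea of lifting different instruction encodings into one shared representation can be sketched in a few lines. The opcode tables below are invented for illustration and are not the real Intel, ARM or MIPS encodings:

```python
# Toy "lifter": map ISA-specific encodings of the same operation onto one
# shared intermediate representation (IR). The opcodes here are made up.
LIFTERS = {
    "x86":  {0x01: lambda ops: ("ir_add",) + ops},
    "arm":  {0x0B: lambda ops: ("ir_add",) + ops},
    "mips": {0x20: lambda ops: ("ir_add",) + ops},
}

def lift(arch, opcode, operands):
    """Translate one native instruction into the shared IR."""
    return LIFTERS[arch][opcode](operands)

# The same semantic instruction, two different native encodings:
x86_ir = lift("x86", 0x01, ("r1", "r1", "r2"))
arm_ir = lift("arm", 0x0B, ("r1", "r1", "r2"))
print(x86_ir == arm_ir)  # True: both lift to ("ir_add", "r1", "r1", "r2")
```

The payoff is that security checks need to be written only once, against the IR, instead of once per instruction set.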
Closing security gaps automatically
The researchers then look for security-critical programming errors at the level of the intermediate language, and they intend to close the gaps thus detected automatically. This does not yet work for every type of software, but the team has already demonstrated that the method is sound in principle: in 2015, the IT experts identified a security gap in Internet Explorer and succeeded in closing it automatically.
The method is expected to be completely processor-independent by the time the project is wrapped up in 2020. Integrating protection mechanisms is supposed to work for many different devices, too.
Helping faster than the manufacturers
"Sometimes, it can take a while until security gaps in a device are noticed and fixed by the manufacturers," says Thorsten Holz. This is where the methods developed by his group can help: they protect users from attacks even when security gaps have not yet been officially closed.
The software was developed as part of a Natural Environment Research Council (NERC) Innovation Project awarded to Professor Matthew Bennett and Dr Marcin Budka in 2015 for forensic footprint analysis. They have been developing techniques to enable modern footwear evidence to be captured in three dimensions and analysed digitally to improve crime scene practice.
Footprints reveal much about the individuals who made them: their body mass, height and walking speed. "Footprints contain information about the way our ancestors moved," explains Professor Bennett. "The tracks at Laetoli are the oldest in the world and show a line of footprints from our early ancestors, preserved in volcanic ash. They provide a fascinating insight into how early humans walked. The techniques we have been developing for use at modern crime scenes can also reveal something new about these ancient track sites."
The Laetoli tracks were discovered by Mary Leakey in 1976 and are thought to be around 3.6 million years old. There are two parallel trackways on the site, where two ancient hominins walked across the surface. One of these trackways was obscured when a third person followed the same path. The merged trackway has largely been ignored by scientists over the last 40 years, and the fierce debate about the walking style of the track-makers has predominantly focused on the undisturbed trackway.
By using the software developed through the NERC Innovation Project, Professor Bennett and his colleagues have been able to decouple the tracks of this merged trail and reveal for the first time the shape of the tracks left by this mysterious third track-maker. There is also an intriguing hint of a fourth track-maker at the site.
"We're really pleased that we can use our techniques to capture new data from these extremely old footprints," says Dr Marcin Budka who developed the software used in the study.
"It means that we have effectively doubled the information that the palaeo-anthropological community has available for study of these hominin track-makers," continues Dr Reynolds, one of the co-authors of the study.
"As well as making new discoveries about our early ancestors, we can apply this science to help modern society combat crime. By digitising tracks at a crime scene we can preserve, share and study this evidence more easily," says Sarita Morse who helped conceive the original analysis.
For more information, please see the following video: https://www.youtube.com/watch?v=Rl8odSqoDZc
To do this, software programs in these systems calculate predictive relationships from massive amounts of data. The systems identify these predictive relationships using advanced algorithms -- sets of rules for solving math problems -- and "training data." This data is then used to construct the models and features that enable a system to determine the latest best-seller you wish to read or to predict the likelihood of rain next week.
This intricate process means that a piece of raw data often goes through a series of computations in a system. The computations and information derived by the system from that data together form a complex propagation network called the data's "lineage." The term was coined by Yinzhi Cao, an assistant professor of computer science and engineering, and his colleague, Junfeng Yang of Columbia University, who are pioneering a novel approach to make learning systems forget.
Considering how important this concept is to increasing security and protecting privacy, Cao and Yang believe that easy adoption of forgetting systems will be increasingly in demand. The two researchers have developed a way to do it faster and more effectively than can be done using current methods.
Their concept, called "machine unlearning," is so promising that Cao and Yang have been awarded a four-year, $1.2 million National Science Foundation grant to develop the approach.
"Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity," said Cao, a principal investigator on the project. "These systems must remove the data and undo its effects so that all future operations run as if the data never existed."
Increasing security and privacy protection
There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one.
Naturally, users unhappy with these newfound risks want their data, and its influence on the models and statistics, to be completely forgotten.
Security is another reason. Consider anomaly-based intrusion detection systems used to detect malicious software. In order to positively identify an attack, the system must be taught to recognize normal system activity. Therefore the security of these systems hinges on the model of normal behaviors extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget the data and its lineage in order to regain security.
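The pollution effect is easy to demonstrate with a toy detector that flags any activity more than three standard deviations above the mean of its training data (all numbers here are invented):

```python
import statistics

baseline = [10, 11, 9, 10, 12, 10, 11]   # legitimate "normal activity" metric
poisoned = baseline + [60]               # attacker-injected "normal" sample

def threshold(training_data, k=3):
    """Anything above mean + k standard deviations is flagged as an attack."""
    return statistics.mean(training_data) + k * statistics.pstdev(training_data)

attack = 55
print(attack > threshold(baseline))   # True: the clean model detects it
print(attack > threshold(poisoned))   # False: one polluted point masks it
```

Forgetting the polluted sample and its lineage restores the original, working threshold.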
Widely used learning systems such as Google Search are, for the most part, only able to forget a user's raw data -- and not the data's lineage -- upon request. This is problematic for users who wish to ensure that any trace of unwanted data is removed completely, and it is also a challenge for service providers who have strong incentives to fulfill data removal requests and retain customer trust.
Service providers will increasingly need to be able to remove data and its lineage completely to comply with laws governing user data privacy, such as the "right to be forgotten" ruling issued in 2014 by the European Union's top court. In October 2014, Google removed more than 170,000 links to comply with the ruling, which affirmed users' right to control what appears when their names are searched. In July 2015, Google said it had received more than a quarter-million such requests.
Breaking down dependencies
Building on work that was presented at a 2015 IEEE Symposium and then published, Cao and Yang's "machine unlearning" method is based on the fact that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch.
Their approach introduces a small number of summations between the learning algorithm and the training data, breaking the direct dependency between them. The learning algorithms then depend only on the summations, not on individual data items. Using this method, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data: simply recomputing a small number of summations removes the data and its lineage completely -- and much faster than retraining the system from scratch.
Cao believes he and Yang are the first to establish the connection between unlearning and the summation form.
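As a sketch of the summation idea, consider one-dimensional least-squares regression, where the fitted slope depends on the training data only through two sums; forgetting a sample is then two subtractions rather than a retrain. This is illustrative code, not the authors' implementation:

```python
# Summation form of 1-D least-squares: the fit depends on the data only
# through the sums Sxx and Sxy, so "unlearning" a point is two subtractions.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]    # roughly y = 2x (made-up data)

Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
w = Sxy / Sxx                      # slope fitted on all five points

# Forget the last point without touching the other four:
Sxx -= xs[-1] * xs[-1]
Sxy -= xs[-1] * ys[-1]
w_unlearned = Sxy / Sxx

# Identical to retraining from scratch on the remaining data:
w_retrained = (sum(x * y for x, y in zip(xs[:-1], ys[:-1]))
               / sum(x * x for x in xs[:-1]))
print(abs(w_unlearned - w_retrained) < 1e-12)  # True
```

For models with many features the same trick applies to the corresponding matrix sums, which is why recomputing the summations is so much cheaper than a full retrain.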
The success of these initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.
In their paper's introduction, Cao and Yang say that "machine unlearning" could play a key role in enhancing security and privacy and in our economic future:
"We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks.
"We envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership."
Data-mining experts from the University of Maryland and Virginia Tech recently co-authored a book that ranked the vulnerability of 44 nations to cyberattacks. Lead author V. S. Subrahmanian discussed this research on Wednesday, March 9 at a panel discussion hosted by the Foundation for Defense of Democracies in Washington, D.C.
The United States ranked 11th safest, while several Scandinavian countries (Denmark, Norway and Finland) ranked the safest. China, India, Russia, Saudi Arabia and South Korea ranked among the most vulnerable.
"Our goal was to characterize how vulnerable different countries were, identify their current cybersecurity policies and determine how those policies might need to change in response to this new information," said Subrahmanian, a UMD professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS).
The book's authors conducted a two-year study that analyzed more than 20 billion automatically generated reports, collected from 4 million machines per year worldwide. The researchers based their rankings, in part, on the number of machines attacked in a given country and the number of times each machine was attacked.
Machines using Symantec anti-virus software automatically generated these reports, but only when a machine's user opted in to provide the data.
Trojans, followed by viruses and worms, posed the principal threats to machines in the United States. However, misleading software (i.e., fake anti-virus programs and disk cleanup utilities) was far more prevalent in the U.S. than in other nations with a similar gross domestic product. These results suggest that U.S. efforts to reduce cyberthreats should focus on educating users to recognize and avoid misleading software.
In a foreword to the book, Isaac Ben-Israel, chair of the Israeli Space Agency and former head of that nation's National Cyber Bureau, wrote: "People--even experts--often have gross misconceptions about the relative vulnerability [to cyber attack] of certain countries. The authors of this book succeed in empirically refuting many of those wrong beliefs."
The book's findings include economic and educational data gathered by UMD's Center for Digital International Government, for which Subrahmanian serves as director. The researchers integrated all of the data to help shape specific policy recommendations for each of the countries studied, including strategic investments in education, research and public-private partnerships.
Subrahmanian's co-authors on the book are Michael Ovelgönne, a former UMIACS postdoctoral researcher; Tudor Dumitras, an assistant professor of electrical and computer engineering in the Maryland Cybersecurity Center; and B. Aditya Prakash, an assistant professor of computer science at Virginia Tech.
A related research paper on forecasting the spread of malware in 40 countries--containing much of the same data used for the book--was presented at the 9th ACM International Conference on Web Search and Data Mining in February 2016.
Another paper, accepted for publication in the journal ACM Transactions on Intelligent Systems and Technology, looked at the human aspect of cyberattacks--for example, why some people's online behavior makes them more vulnerable to malware that masquerades as legitimate software.
The book, "The Global Cyber Vulnerability Report," by V. S. Subrahmanian, Michael Ovelgönne, Tudor Dumitras and B. Aditya Prakash, was published by Springer in December 2015.
The research paper, "Ensemble Models for Data-Driven Prediction of Malware Infections," by C. Kang, N. Park, B. A. Prakash, E. Serra, and V. S. Subrahmanian, appears in Proceedings of the 9th ACM International Conference on Web Search and Data Mining (WSDM 2016), San Francisco, February 2016.
The research paper, "Understanding the Relationship between Human Behavior and Susceptibility to Cyber-Attacks: A Data-Driven Approach," M. Ovelgönne, T. Dumitras, A. Prakash, V. S. Subrahmanian, and B. Wang, was accepted for publication in ACM Transactions on Intelligent Systems & Technology in February 2016.
"We found a new way to see a difference between the quantum universe and a classical one, using nothing more complex than a compression program," says Dagomir Kaszlikowski, a Principal Investigator at the Centre for Quantum Technologies (CQT) at the National University of Singapore.
Kaszlikowski worked with other researchers from CQT and collaborators at the Jagiellonian University and Adam Mickiewicz University in Poland to show that compression software, applied to experimental data, can reveal when a system crosses the boundary of our classical picture of the Universe into the quantum realm. The work is published in the March issue of New Journal of Physics.
In particular, the technique detects evidence of quantum entanglement between two particles. Entangled particles coordinate their behaviour in ways that cannot be explained by signals sent between them or properties decided in advance. This phenomenon has shown up in many experiments already, but the new approach does without an assumption that is usually made in the measurements.
"It may sound trivial to weaken an assumption, but this one is at the core of how we think about quantum physics," says co-author Christian Kurtsiefer at CQT. The relaxed assumption is that particles measured in an experiment are independent and identically distributed -- or i.i.d.
Experiments are typically performed on pairs of entangled particles, such as pairs of photons. Measure one of the light particles and you get results that seem random. The photon may have a 50:50 chance of having a polarization that points up or down, for example. The entanglement shows up when you measure the other photon of the pair: you'll get a matching result.
A mathematical relation known as Bell's theorem shows that quantum physics allows matching results with greater probability than is possible with classical physics. This is what previous experiments have tested. But the theorem is derived for just one pair of particles, whereas scientists must work out the probabilities statistically, by measuring many pairs. The situations are equivalent only as long as each particle-pair is identical and independent of every other one -- the i.i.d. assumption.
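The gap between the two predictions can be checked directly. The snippet below enumerates every deterministic local strategy to recover the classical CHSH bound of 2, and evaluates the standard quantum correlation E = cos 2(a − b) for polarization measurements at the usual optimal angles:

```python
import math
from itertools import product

# Classical side: each particle carries pre-decided outcomes (+1 or -1)
# for its two measurement settings. Enumerate all such local strategies.
classical_bound = max(abs(A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1)
                      for A0, A1, B0, B1 in product((-1, 1), repeat=4))

# Quantum side: correlation for polarization-entangled photon pairs.
def E(a_deg, b_deg):
    return math.cos(2 * math.radians(a_deg - b_deg))

a, a2, b, b2 = 0, 45, 22.5, 67.5   # standard angle choices
S_quantum = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(classical_bound, round(S_quantum, 3))  # 2 vs 2.828 (= 2*sqrt(2))
```

The i.i.d. assumption enters when those correlations E are estimated as averages over many particle pairs, which is exactly the step the compression approach sidesteps.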
With the new technique, the measurements are carried out the same way but the results are analyzed differently. Instead of converting the results into probabilities, the raw data (in the form of lists of 1s and 0s) is used directly as input into compression software.
Compression algorithms work by identifying patterns in the data and encoding them in a more efficient way. When applied to data from the experiment, they effectively detect the correlations resulting from quantum entanglement.
In the theoretical part of the work, Kaszlikowski and his collaborators worked out a relation akin to Bell's theorem that's based on the 'normalized compression difference' between subsets of the data. If the universe is classical, this quantity must stay less than zero. Quantum physics, they predicted, would allow it to reach 0.24. The theorists teamed up with Kurtsiefer's experimental group to test the idea.
First the team collected data from measurements on thousands of entangled photons. Then they used an open-source compression algorithm known as the Lempel-Ziv-Markov chain algorithm (used in the popular 7-zip archiver) to calculate the normalized compression differences. They found a value exceeding zero -- 0.0494 ± 0.0076 -- proving their system had crossed the classical-quantum boundary. The value is less than the predicted maximum because the compression does not reach the theoretical limit and because the quantum states cannot be generated and detected perfectly.
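The compression step is easy to reproduce with Python's standard `lzma` module, which implements the same Lempel-Ziv-Markov chain algorithm as 7-zip. This sketch computes the classic normalized compression distance (a stand-in for the paper's own "normalized compression difference" statistic) on idealized, perfectly matching outcome strings:

```python
import lzma
import random

def C(data: bytes) -> int:
    """Compressed size via LZMA -- the Lempel-Ziv-Markov chain algorithm."""
    return len(lzma.compress(data, preset=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for strongly related data,
    near 1 for unrelated data."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

random.seed(42)
alice = bytes(random.getrandbits(1) for _ in range(4000))  # 0/1 outcomes
bob = alice          # perfectly matching outcomes, idealizing entangled pairs
other = bytes(random.getrandbits(1) for _ in range(4000))  # independent run

print(ncd(alice, bob) < ncd(alice, other))  # True: correlated data compresses jointly better
```

The compressor finds the long shared patterns between Alice's and Bob's records and so needs far fewer bytes for the pair than for two independent records; real experimental data is noisier, which is one reason the measured value stays well below the theoretical maximum.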
It's not yet clear whether the new technique will find practical applications, but the researchers see their 'algorithmic' approach to the problem fitting into a bigger picture of how to think about physics. They derived their relation by considering correlations between particles produced by an algorithm fed to two computing machines.
"There is a trend to look at physical systems and processes as programs run on a computer made of the constituents of our universe," write the authors. This work presents an "explicit, experimentally testable example."
"Many people like singing but they lack the skills to do so," says Minghui Dong, the project leader at A*STAR's Institute for Infocomm Research (I2R). "We want to use our technology to help the average person sing well."
Speech consists of three key elements: content, prosody and timbre. Content is conveyed using words; prosody, or melody in the case of singing, is expressed through rhythm and pitch; and timbre is the distinctive quality that makes a banjo sound different from a trumpet and one singer's voice different from another's. I2R Speech2Singing works by polishing melody while retaining the original content and timbre of a sound.
Existing technologies that focus on correcting melody try to align off-tune sounds to the closest note on the musical scale or to the exact note in the original score. The former works well for professional singers who may be only slightly out of tune but cannot fix those who are singing drastically off-key or simply reading out loud. The latter is better at correcting discordant tunes but ignores many other aspects of melody such as vibrato and vowel stretching.
I2R Speech2Singing uses recordings by professional singers as templates to correct the melody of a singing voice or to convert a speaking voice into a singing one. The software detects the timing of each phonetic sound using speech recognition technology and then stretches or compresses the duration of the signal using voice conversion technology to match the rhythm to that of a professional singer. A speech synthesizer then combines the time-corrected voice with pitch data and background music to produce a beautiful solo.
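The timing-correction step can be sketched as follows, assuming the phoneme boundaries have already been detected. The boundary values are invented, and real systems use pitch-preserving time stretching rather than the naive linear resampling shown here:

```python
def stretch_segment(seg, target_len):
    """Resample one phoneme segment to the template's duration by linear
    interpolation -- a toy stand-in for the voice-conversion step."""
    if target_len == 1:
        return [seg[0]]
    out = []
    for i in range(target_len):
        pos = i * (len(seg) - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, len(seg) - 1)
        frac = pos - lo
        out.append(seg[lo] * (1 - frac) + seg[hi] * frac)
    return out

def align_to_template(signal, bounds, template_bounds):
    """Stretch/compress each detected segment of `signal` so its timing
    matches the professional template's segment boundaries."""
    out = []
    for i in range(len(bounds) - 1):
        seg = signal[bounds[i]:bounds[i + 1]]
        out += stretch_segment(seg, template_bounds[i + 1] - template_bounds[i])
    return out

voice = [float(i % 7) for i in range(1000)]        # stand-in voice samples
sung = align_to_template(voice, [0, 400, 1000], [0, 600, 1200])
print(len(sung))  # 1200: the rhythm now matches the template's timing
```

After this alignment, a synthesizer can impose the template's pitch contour and mix in the backing track, as described above.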
"When we compared the output with other currently available applications, we realized that our software generated a much better voice quality," says Dr Dong.
Singaporeans were first introduced to the software in 2013 through "Sing for Singapore," part of the official mobile app of National Day Parade 2013. And in 2014, I2R Speech2Singing won the award for best Show & Tell contribution at INTERSPEECH, a major global venue for research on the science and technology of speech communication.
Dr Dong and his team are now developing a solution to quickly add songs into the software so that large-scale song databases can be easily built.
"Even reputable apps can lead users to websites hosting malicious content," said Yan Chen, professor of computer science at the Northwestern University McCormick School of Engineering. "No matter what app you use, you are not immune to malicious ads."
Most people are accustomed to the ads they encounter when interacting with apps on mobile devices. Some pop up between stages in games while others sit quietly in the sidebars. Mostly harmless, ads are a source of income for developers who often offer their apps for free. But as more and more people own smartphones, the number of malicious ads hidden in apps is growing -- tripling in just the past year.
In order to curb attacks from hidden malicious ads, Chen and his team are working to better understand where these ads originate and how they operate. This research has resulted in a dynamic system for Android that detects malicious ads as well as locates and identifies the parties that intentionally or unintentionally allowed them to reach the end user.
Last year, Chen's team used its system to test about one million apps in two months. It found that while the percentage of malicious ads is actually quite small (0.1 percent), the absolute number is still large considering that 2 billion people own smartphones worldwide. Ads that ask the user to download a program are the most dangerous, containing malicious software about 50 percent of the time.
Ad networks could potentially use Chen's system to prevent malicious ads from sneaking into the ad exchange. Ad networks buy space in the app through developers, and then advertisers bid for that space to display their ads. Ad networks use sophisticated algorithms for targeting and inventory management, but there are no tools available to check the safety of each ad.
"It's very hard for the ad networks," Chen said. "They get millions of ads from different sources. Even if they had the resources to check each ad, those ads could change."
The team will present their research, findings, and detection system on Feb. 22, 2016 at the 2016 Network and Distributed System Security Symposium in San Diego, California.
Chen's work culminated from the exploration of the little-studied interface between mobile apps and the Web. Many in-app advertisements take advantage of this interface: when users click on the advertisement within the app, they are led to an outside web page that hosts malicious content. Whether it is an offer to download fake anti-virus software or fake media players, or to claim free gifts, the content can take many forms to trick the user into downloading software that gathers sensitive information, sends unauthorized and often premium-rate messages, or displays unwanted ads.
When Chen's detection software runs, it electronically clicks the ads within apps and follows a chain of links to the final landing page. It then downloads that page's code and completes an analysis to determine whether or not it's malicious. It also uses machine-learning techniques to track the evolving behaviors of malware as it attempts to elude detection.
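In outline, the detection loop looks like this. The URLs, pages, and keyword heuristic are all invented stand-ins for the system's actual crawling and machine-learning analysis:

```python
# Toy model of the pipeline: follow an ad's redirect chain to the landing
# page, then analyze that page's content. All URLs and pages are made up.
REDIRECTS = {
    "ad.example/click": "tracker.example/r1",
    "tracker.example/r1": "landing.example/free-gift",
}
PAGES = {
    "landing.example/free-gift":
        "Claim your FREE gift! Download our media player now",
}
SUSPICIOUS = ("free gift", "download", "media player")

def follow_chain(url, max_hops=10):
    """Electronically 'click' the ad and record each hop to the landing page."""
    chain = [url]
    while url in REDIRECTS and len(chain) <= max_hops:
        url = REDIRECTS[url]
        chain.append(url)
    return chain

def looks_malicious(page_text):
    """Trivial keyword heuristic standing in for the real ML classifier."""
    text = page_text.lower()
    return sum(kw in text for kw in SUSPICIOUS) >= 2

landing = follow_chain("ad.example/click")[-1]
print(looks_malicious(PAGES[landing]))  # True
```

Recording the full chain, not just the landing page, is what lets the system identify which party in the ad-delivery path introduced the malicious content.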
Currently, Chen's team is testing ten times more ads with the intention of building a more efficient system. He said their goal is to diagnose and detect malicious ads even faster. As people put more and more private information into their phones, attackers are motivated to pump more malicious ads into the market. Chen wants to give ad networks and users the tools to be ready.
"Attackers follow the money," Chen said. "More people are putting their credit card and banking information into their phones for mobile payment options. The smartphone has become a treasure for attackers, so they are investing heavily in compromising them. That means we will see more and more malicious ads and malware. "
The software, known as FireAnt (Filter, Identify, Report, and Export Analysis Tool), can speedily download, devour, and discard large collections of online data, leaving the relevant and important information for further investigation, all at the touch of a button.
Members of the University's Centre for Corpus Approaches to Social Science (CASS) led by Dr Claire Hardaker have produced this cutting-edge tool so that they can pinpoint offenders on busy social networks such as Twitter.
FireAnt was built as part of an international collaboration with corpus linguist and software expert Laurence Anthony, a professor at Waseda University, Japan and honorary research fellow at CASS.
While initially designed to download and handle data from Twitter, FireAnt can analyse texts from almost any online source, including sites such as Facebook and Google+.
"We have developed a software tool designed to enhance the signal and suppress the noise in large datasets," explains Dr Hardaker.
"It will allow the ordinary user to download Twitter data for their own analyses. Once this is collected, FireAnt then becomes an intelligent filter that discards unwanted messages and leaves behind data that can provide all-important answers. The software, which we offer as a free resource for those interested in undertaking linguistic analysis of online data, uses practical filters such as user-name, location, time, and content.
"The filtered information can then be presented as raw data, a time-series graph, a geographical map, or even a visualization of the network interactions. Users don't need to know any programming to use the tool -- everything can be done at the push of a button. "
FireAnt is designed to reduce potentially millions of messages down to a sample that contains only what the user wants to see, such as every tweet containing the word 'British', sent in the middle of the night, from users whose bio contains the word 'patriotic'.
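The kind of filtering just described can be sketched as a simple query over tweet records; the field names and filter logic here are illustrative, not FireAnt's actual interface:

```python
# Hypothetical sketch of FireAnt-style filtering: keep only tweets containing
# 'British', sent in the middle of the night, from users with 'patriotic' bios.
from datetime import datetime

tweets = [
    {"user": "a1", "bio": "proudly patriotic", "time": datetime(2016, 5, 1, 2, 30),
     "text": "British values!"},
    {"user": "b2", "bio": "football fan", "time": datetime(2016, 5, 1, 14, 0),
     "text": "match day"},
    {"user": "c3", "bio": "patriotic and loud", "time": datetime(2016, 5, 1, 3, 15),
     "text": "Nothing more British than tea"},
]

def night_time(t, start=0, end=5):
    """True if the timestamp falls in the middle of the night."""
    return start <= t.hour < end

matches = [t for t in tweets
           if "british" in t["text"].lower()
           and night_time(t["time"])
           and "patriotic" in t["bio"].lower()]

print([t["user"] for t in matches])  # ['a1', 'c3']
```

Each condition mirrors one of FireAnt's practical filters (content, time, user attributes); stacking them reduces millions of messages to the small sample of interest.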
Dr Hardaker, a lecturer in forensic corpus linguistics, began an Economic and Social Research Council-funded project researching abusive behaviour on Twitter in December 2013. The project quickly demonstrated that, while tackling anti-social online behaviour is of key importance, sites like Twitter produce data at such high volumes that simply trying to identify relevant messages amongst all the irrelevant ones is a huge challenge in itself.
Less than a year into the project, Dr Hardaker and her team were invited to Twitter's London headquarters to present project findings to the Crown Prosecution Service and Twitter itself. The research subsequently influenced Twitter to update its policy on abusive online behaviour.
The interest from the Crown Prosecution Service and the police encouraged Dr Hardaker to work with fellow corpus linguist, Professor Laurence Anthony to turn the research into a tool that could both collect online data, and then filter out the 'noise' from millions of messages, thereby enhancing the useful signals that can lead to the identification of accounts, texts, and behaviours of interest.
Dr Hardaker explained that the Government is trying to understand how social networks are involved in issues ranging from child grooming and human trafficking to fraud and radicalization. A key aspect of Dr Hardaker's work is a focus on the process of escalation: online messages may start out as simply unpleasant or annoying but intensify into extreme, illegal behaviours that could even turn into physical, offline violence. In this respect, FireAnt can offer the opportunity to pinpoint high-risk individuals and networks that may go on to be a threat, whether to themselves or others.
Dr Claire Hardaker specialises in research into online aggression, manipulation and deception. She is currently working on projects that involve analysing live online social networks for the escalation of abusive behaviour, and the use of the Internet in transnational crime such as human trafficking and modern slavery.
FireAnt is free to download from: http://www.laurenceanthony.net/software/fireant
"It's the weird or unusual structure and behaviors of a material that makes it useful for a technological application," said Ames Laboratory Chief Research Officer Duane Johnson. "So the questions become: How do we find those unusual structures and behaviors? How do we understand exactly how they happen? Better yet, how do we control them so we can use them? "
The answer lies in fully understanding what scientists call solid-to-solid phase transformations, changes of the structure of one solid phase into another under stress, heat, magnetic field, or other fields. School kids learn, for example, that water (liquid phase) transforms when heated to steam (gas phase). But a solid, like a metallic alloy, can have various structures exhibiting order or disorder depending on changes in temperature and pressure, still remain a solid, and display key changes in properties like shape memory, magnetism, or energy conversion.
"Those solid-to-solid transformations are behind a lot of the special features we like and want in materials," explained Johnson, who heads up the project, called Mapping and Manipulating Materials Phase Transformation Pathways. "They are behind things that are already familiar to us, like the expandable stents used in heart surgery and bendable eyeglass frames; but they are also for uses we're still exploring, like energy-harvesting technologies and magnetic cooling. "
The computer codes, developed under Johnson's lead, advance and adapt new and existing software. One such code, called MECCA (Multiple-scattering Electronic-structure Code for Complex Alloys), is uniquely designed to tackle the complex problem of analyzing and predicting the atomic structural changes and behaviors of solids as they undergo phase transformations, and to reveal why they behave as they do so that the transformations can be controlled.
The program will assist and inform other ongoing materials research projects at Ames Laboratory, including ones with experimentalists on the hunt for new magnetic and high-entropy alloys, thermoelectrics, rare-earth magnets, and iron-arsenide superconductors.
"This theoretical method will become a key tool to guide the experimentalists to the compositions most likely to have unique capabilities, and to learn how to manipulate and control them for new applications," Johnson said.
Every state in the United States requires cancer cases to be reported to statewide cancer registries for disease tracking, identification of at-risk populations, and recognition of unusual trends or clusters. Typically, however, busy health care providers submit cancer reports to equally busy public health departments months into the course of a patient's treatment rather than at the time of initial diagnosis.
This information can be difficult for health officials to interpret, which can further delay health department action when action is needed. The Regenstrief Institute and IU researchers have demonstrated that machine learning can greatly facilitate the process by automatically and quickly extracting crucial meaning from plaintext (also known as free-text) pathology reports and using it for decision-making.
"Towards Better Public Health Reporting Using Existing Off the Shelf Approaches: A Comparison of Alternative Cancer Detection Approaches Using Plaintext Medical Data and Non-dictionary Based Feature Selection" is published in the April 2016 issue of the Journal of Biomedical Informatics .
"We think that its no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not," said study senior author Shaun Grannis, M. D., M. S., interim director of the Regenstrief Center of Biomedical Informatics. "We have come to the point in time that technology can handle this. A human's time is better spent helping other humans by providing them with better clinical care. "
"A lot of the work that we will be doing in informatics in the next few years will be focused on how we can benefit from machine learning and artificial intelligence. Everything -- physician practices, health care systems, health information exchanges, insurers, as well as public health departments -- are awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can't do it -- but computers can. "
Dr. Grannis, a Regenstrief Institute investigator and an associate professor of family medicine at the IU School of Medicine, is the architect of the Regenstrief syndromic surveillance detector for communicable diseases and led the technical implementation of Indiana's Public Health Emergency Surveillance System -- one of the nation's largest. Studies over the past decade have shown that this system detects outbreaks of communicable diseases seven to nine days earlier and finds four times as many cases as human reporting while providing more complete data.
"What's also interesting is that our efforts show significant potential for use in underserved nations, where a majority of clinical data is collected in the form of unstructured free text," said study first author Suranga N. Kasthurirathne, a doctoral student at School of Informatics and Computing at IUPUI. "Also, in addition to cancer detection, our approach can be adopted for a wide range of other conditions as well. "
The researchers sampled 7,000 free-text pathology reports from over 30 hospitals that participate in the Indiana Health Information Exchange and used open source tools, classification algorithms, and varying feature selection approaches to predict if a report was positive or negative for cancer. The results indicated that a fully automated review yielded results similar to or better than those of trained human reviewers, saving both time and money.
"Machine learning can now support ideas and concepts that we have been aware of for decades, such as a basic understanding of medical terms," said Dr. Grannis. "We found that artificial intelligence was as least as accurate as humans in identifying cancer cases from free-text clinical data. For example the computer 'learned' that the word 'sheet' or 'sheets' signified cancer as 'sheet' or 'sheets of cells' are used in pathology reports to indicate malignancy.
"This is not an advance in ideas, it's a major infrastructure advance -- we have the technology, we have the data, we have the software from which we saw accurate, rapid review of vast amounts of data without human oversight or supervision. "
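As a toy illustration of how a text classifier can pick up cues like "sheets of cells" (this is a deliberately simplified sketch, not the study's actual off-the-shelf pipeline, and the report snippets are invented):

```python
# Toy bag-of-words classifier: learn per-token weights from labelled
# pathology snippets, then score a new report as cancer / no cancer.
from collections import Counter

def tokens(text):
    return text.lower().replace(",", " ").split()

positive = ["sheets of cells consistent with malignancy",
            "invasive carcinoma present"]
negative = ["benign tissue, no atypia",
            "unremarkable specimen, benign findings"]

pos_counts = Counter(t for doc in positive for t in tokens(doc))
neg_counts = Counter(t for doc in negative for t in tokens(doc))

def score(report):
    """Sum of learned token weights; positive means cancer-like language."""
    return sum(pos_counts[t] - neg_counts[t] for t in tokens(report))

report = "sections show sheets of cells, consistent with carcinoma"
print("cancer" if score(report) > 0 else "no cancer")  # cancer
```

Real systems replace the hand-labelled snippets with thousands of reviewed reports and the simple token weights with proper feature selection and classification algorithms, but the principle, learning which words signal malignancy, is the same.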
The software not only helped a robot deal efficiently with clutter, it surprisingly revealed the robot's creativity in solving problems.
"It was exploiting sort of superhuman capabilities," Siddhartha Srinivasa, associate professor of robotics, said of his lab's two-armed mobile robot, the Home Exploring Robot Butler, or HERB. "The robot's wrist has a 270-degree range, which led to behaviors we didn't expect. Sometimes, we're blinded by our own anthropomorphism. "
In one case, the robot used the crook of its arm to cradle an object to be moved.
"We never taught it that," Srinivasa added.
The rearrangement planner software was developed in Srinivasa's lab by Jennifer King, a Ph.D. student in robotics, and Marco Cognetti, a Ph.D. student at Sapienza University of Rome who spent six months in Srinivasa's lab. They will present their findings May 19 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden.
In addition to HERB, the software was tested on NASA's KRex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, KRex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.
Robots are adept at "pick-and-place" (P&P) processes, picking up an object in a specified place and putting it down at another specified place. Srinivasa said this has great applications in places where clutter isn't a problem, such as factory production lines. But that's not what robots encounter when they land on distant planets or, when "helpmate" robots eventually land in people's homes.
P&P simply doesn't scale up in a world full of clutter. When a person reaches for a milk carton in a refrigerator, he doesn't necessarily move every other item out of the way. Rather, a person might move an item or two, while shoving others out of the way as the carton is pulled out.
The rearrangement planner automatically finds a balance between the two strategies, Srinivasa said, based on the robot's progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted or stepped on. And it can be taught to pay attention to items that might be valuable or delicate, in case it must extricate a bull from a china shop.
One limitation of this system is that once the robot has evaluated a situation and developed a plan to move an object, it effectively closes its eyes to execute the plan. Work is underway to provide tactile and other feedback that can alert the robot to changes and miscalculations and can help it make corrections when necessary. NASA, the National Science Foundation, Toyota Motor Engineering and Manufacturing and the Office of Naval Research supported this research.
The problem was in a new feature Facebook added to its service at the start of the month: the ability to post videos as comments on other Facebook posts.
The researcher says that after fiddling around with some Facebook API requests, he was able to delete any video uploaded on the platform, based on its video ID.
"This bug is proof of flaw in logic rather than daily technical flaws which we see like RCE, SSRF, etc.," the researcher explains .
The issue, according to Hivarekar, is that when a user uploads a video as a comment, the video is uploaded to his Facebook profile, it's given a video ID, and then attached to the desired post based on that video ID.
In his tests, the researcher discovered that he could create a comment via the Facebook API, then send a second API request to attach any video ID from any user to that comment, and later delete the comment with a third request.
Since the video ID was attached to the comment, the video was removed as well. Hivarekar says that Facebook's engineers forgot to add permission checks to verify that the person deleting the comment owned both the comment and the attached video.
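A hypothetical reconstruction of the flawed logic, with all function and field names invented for illustration (this is not Facebook's actual code or API):

```python
# Sketch of the missing-permission-check bug: deleting a comment also deletes
# the attached video, without verifying the deleter owns that video.

videos = {"v1": {"owner": "alice"}}
comments = {}

def attach_video_comment(comment_id, user, video_id):
    # Flaw: no check that `user` owns `video_id` before attaching it.
    comments[comment_id] = {"owner": user, "video": video_id}

def delete_comment(comment_id, user, check_ownership=False):
    c = comments[comment_id]
    if c["owner"] != user:
        raise PermissionError("not your comment")
    video_id = c["video"]
    if check_ownership and videos[video_id]["owner"] != user:
        raise PermissionError("not your video")  # the check Facebook added in the fix
    del comments[comment_id]
    del videos[video_id]  # deleting the comment cascades to the video

# Attacker 'mallory' attaches Alice's video to her own comment, then deletes it:
attach_video_comment("c1", "mallory", "v1")
delete_comment("c1", "mallory")
print("v1" in videos)  # False: Alice's video is gone
```

With `check_ownership=True` (standing in for the patched server logic), the same attack raises `PermissionError` and Alice's video survives.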
The researcher says he reported the issue to Facebook via the company's bug bounty program on June 11, two days after the video commenting feature went live.
Facebook issued a temporary fix after only 23 minutes, and later patched the bug for good after 11 hours. For his extremely critical bug, the researcher says Facebook gave him a five-digit bug bounty reward.
CAC stands for Common Access Card and describes the standard ID card for all DoD military and civilian personnel, selected reserves, and some contractors.
The CAC Scan app, as advertised on its Google Play Store description, is a simple app that scans the barcode found on these cards and outputs the encoded information on the phone's screen.
This includes the cardholder's first and last name, rank, EDIPI ID, and Social Security number.
The DoD says the app works as advertised and that it was created by a US citizen with ties to the US Army. The DoD also warns:
“When you scan your (or someone else’s) CAC, where else does the data go; i.e., who else gets a copy of the results? Why would you need this app? You already know your personal info on your CAC… whose info are you trying to obtain and why?”
Security firm Lookout says it analyzed the app but didn't find any malicious behavior in its code. The app was quite simple, but even though it contained no covert code, the researchers identified a potential attack vector.
When users want to scan a CAC code, CAC Scan loads a third-party app that's installed as a separate application on the user's smartphone. The app, called Barcode Scanner, is a very popular app and has been vetted by multiple security firms as clean.
Lookout identified that Barcode Scanner keeps a history of all the barcodes it scans. A potential attacker that queries for the list of installed apps and finds CAC Scan would automatically know it can search through Barcode Scanner's history to uncover data on CAC cards. This is a classic app collusion attack scenario.
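The collusion scenario can be sketched as follows; only the app names come from the report, while the history format, record contents, and lookup logic are invented for illustration:

```python
# Sketch of the app-collusion attack: the presence of CAC Scan tells the
# attacker that the (shared) Barcode Scanner history may contain CAC data.
import re

installed_apps = ["CAC Scan", "Barcode Scanner", "Maps"]
scan_history = [
    "WIFI:T:WPA;S:guest;;",                   # unrelated QR code
    "SMITH,JOHN,SGT,1234567890,123-45-6789",  # CAC-style record with an SSN
]

def collusion_attack(apps, history):
    if "CAC Scan" not in apps:  # the scanner app's presence is the tell
        return []
    ssn = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    return [entry for entry in history if ssn.search(entry)]

leaked = collusion_attack(installed_apps, scan_history)
print(leaked)  # ['SMITH,JOHN,SGT,1234567890,123-45-6789']
```

Neither app is malicious on its own; the leak emerges from one app's harmless history plus another app's signal about what that history is likely to contain.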
While the DoD was only warning against the app because of potential privacy issues, Lookout has managed to identify attack scenarios through which the app could lead to a compromise of US military personnel data.
The app is not available on the Google Play Store anymore, but it's unknown if it was Google or the developer that took it down.
For Chrome, Google uses a DRM component called Widevine which encrypts video content sent from premium services to the users' browsers. Google's Widevine DRM is used to play premium content from services like Netflix, YouTube Red, or Amazon Prime.
The researchers say they identified a bug in Chrome's Widevine implementation that allows them to intercept the video content while in transit from the Widevine module to the browser's video player.
For a short moment, the premium video content is stored in an unprotected area of the computer's memory. The two researchers created an application that extracts this data and then saves it to disk.
The researchers said they reported the issue to Google on May 24, but the company is still evaluating how to patch it. David Livshits and Alexandra Mikityuk, the two researchers who discovered the issue, said that if Google doesn't patch the bug within 90 days, they will release its details to the public, giving movie pirates the ability to easily download any Netflix release with the push of a button.
A Google representative told Wired that the bug is not specific to Chrome but to the entire Chromium project, meaning other Chromium-based browsers may also be affected, though not Safari, Firefox, IE, or Edge, which use different DRM modules.
The researchers said that forcing the Widevine DRM to run inside a Trusted Execution Environment (TEE) inside the computer's memory would fix the bug.
In other related news, rumors surfaced today that Netflix will soon allow its users to download movies to their PCs. While this negates the Chrome bug, other services are still affected.
The latest upgrade to Ruby, the popular open source dynamic language, will enhance both performance and simplicity.
A preview of the upgrade, Ruby 2.4.0, was released this week. The general release is due on Christmas Day, December 25, with a beta release due several months prior to general availability, Ruby founder Yukihiro Matsumoto said.
Preview 1 improves performance by optimizing the [x, y].max and [x, y].min methods to avoid creating a temporary array under some conditions. The language also gets a performance boost from the new Regexp#match? method, which executes a regexp match without creating a back reference object, reducing object allocation. Regexp holds a regular expression and is used to match patterns against strings. Version 2.4.0 also speeds up instance variable access.
Ruby 2.4.0 promotes simplification through the unification of Fixnum and Bignum integer classes. "In the early stage of Ruby development, I inherited integer class design from Lisp and Smalltalk," Matsumoto said. "Lisp has Fixnum and Bignum. Smalltalk has SmallInteger and BigInteger. But from 20 years of experience, we found out the distinction according to the integer size is artificial and not essential to the programming. "
To improve debugging, thread deadlock detection is enhanced in the upgrade, according to a bulletin on the release. "Ruby has deadlock detection around waiting threads, but its report doesn't include enough information for debugging. Ruby 2.4's deadlock detection shows threads with their backtrace and dependent threads. "
Also in version 2.4.0, the String/Symbol#upcase/downcase/swapcase/capitalize(!) method now supports Unicode case mappings instead of just ASCII mappings. "Unicode was not popular when we added Unicode support to Ruby. So after the discussion with experts -- including (XML co-founder) Tim Bray, who was a member of [the] Unicode consortium back then -- we decided to make those methods to support ASCII only," said Matsumoto. "But as years passed, everybody uses Unicode now, especially in the Web field, and we can rely on the case conversion table from Unicode.org. The new case conversion is more natural for programmers using non-ASCII characters. "
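A few of these 2.4 changes can be seen directly from a Ruby prompt (behavior as documented in the Ruby 2.4.0 release notes):

```ruby
# Ruby 2.4 behaviour: unified Integer class, allocation-free Regexp#match?,
# and Unicode-aware case mapping.
puts (2**100).class           # Integer (formerly Bignum)
puts 42.class                 # Integer (formerly Fixnum)
puts /ab+c/.match?("xabbbc")  # true -- no MatchData object is created
puts "größe".upcase           # GRÖSSE (ß maps to SS under Unicode rules)
```

On versions before 2.4, the first two lines would print Bignum and Fixnum, and `"größe".upcase` would leave the ß untouched because only ASCII letters were mapped.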
I’ve struggled with the decision of whether or not to replace my computer for the better part of a year now. The Core i5-2500K powering my main rig is roughly four and a half years old at this point and despite bumping up to 16GB of RAM a few months back, it may soon be time to retire the faithful system.
This is without a doubt the longest I’ve gone between major updates. I could probably squeeze another couple of years out of my setup – maybe longer with some overclocking – but the performance I’d gain from a sixth-gen Core i7 sure is tempting.
With this week’s open forum, we’re curious to know, how long does a typical PC last you? Do you like to ride a platform into the sunset or do you prefer to keep pace with technology on a fairly regular basis? Let us know in the comments section below!
Quake, the influential first-person shooter from id Software, celebrated its 20th anniversary on Wednesday. In recognition of the occasion, director and designer John Romero published a Quake FAQ that was created on October 22, 1995, that served as a repository for all things Quake up to that point.
Romero notes on his blog that the FAQ, dubbed Quaketalk 95 , was created by Joost Shuur to keep people up to date on everything that had been posted about the game. This includes random tidbits that had been published in magazines as well as discussions in IRC chats and more.
One month after Quaketalk 95, Romero notes, the company held a big meeting that ultimately determined the final direction of the game that launched the following year.
On August 3, 1995, Romero released the first screenshots of Quake on the Internet. His blog post includes the full collection of images as well as the text that accompanied those 320 x 200 resolution screens caps. The post also includes a set of trading cards that someone made.
If you were a fan of Quake back in the day (or even now), the Quaketalk 95 FAQ will certainly induce that warm and fuzzy feeling associated with nostalgia. Quake arrived a few years before I made the jump to PC gaming but I’m sure some of you will appreciate Romero and Shuur's contributions.
In related news, id Software is working on a brand new Quake game. Unveiled at E3 2016, Quake Champions is described as a competitive multiplayer arena shooter. The teaser trailer below will have to hold you over until more details arrive at QuakeCon in August.
If you’ve got an idea for a killer app that’s going to make millions but are held back by a total lack of coding knowledge, Google could have the answer. The search giant is teaming up with online learning platform Udacity to offer a course that teaches people with zero experience how to create Android apps.
The Google Android Basics Nanodegree is aimed at those who are new to programming and looking to eventually become Android developers. The course covers topics such as interactivity, layouts, object-oriented programming basics, data storage, and multi-screen apps.
“We built this program with Google specifically to support aspiring Android Developers with no programming experience. Our goal is to ensure you get the real-world skills you need to actually start building Android apps,” Udacity says.
The class costs $200 a month, but there is a week-long free trial available for anyone who wants to try before they buy. The course outline says it will take 165 hours to complete, which works out at around 21 days for full-time students working 8 hours a day.
The first 50 students to complete the degree will be awarded a full scholarship to enroll in the career-track Android Developer Nanodegree Program, which Google says is a critical step to becoming a successful Android developer.
For those who would like to learn the app-building skills but aren't concerned with receiving the degree, the individual elements from the program are available to study for free, though you can still pay for Udacity services like coaching and guidance.
Samsung has a new convertible notebook that’ll be hitting US stores early next week. It’s similar in design to Lenovo’s popular Yoga line in that the display can fold all the way back to transform the device into a tablet (or fold half way for tent / stand mode).
The Samsung Notebook 7 Spin will be available in three configurations – two with a 15.6-inch screen and a cheaper version with a smaller 13.3-inch display.
On the high end, you get a 15.6-inch Full HD display that’s powered by Intel’s Core i7-6500U clocked at 2.5GHz, a dual-core chip with four threads. It also comes with 16GB of RAM, Nvidia GeForce 940MX graphics with 2GB of VRAM, a 128GB solid state drive and a 1TB hard drive – all running Windows 10.
If you’re shopping on a budget, the 13.3-inch variant also includes a Full HD display with Intel’s Core i5-6200U processor ticking along at 2.3GHz (also a dual-core, quad-thread component). Elsewhere, you’ll find Intel HD Graphics 520, 8GB of RAM and 1TB of traditional hard drive space.
Pricing starts at $799 for the 13.3-inch model and tops out at $1,199 for the larger and faster Notebook 7 Spin. The third configuration, which slots between the aforementioned systems in terms of price, is virtually identical to the faster model except that it lacks the 128GB solid state drive and only has 12GB of RAM. Opting for this system will save you a couple hundred bucks as it checks in at $999 although if it were up to me, I’d do my best to spring for that 128GB SSD.
Look for Samsung’s new Notebook 7 Spin to go on sale starting June 26.
While advancements in electric truck technology continue to be made, there are still limitations due to factors such as size, weight, and expense. But an invention that has been around for almost 150 years could prove a solution to these issues.
German engineering company Siemens says that overhead electrical wires can be used to power electric trucks for theoretically unlimited distances. The vehicles' pantograph power connectors can freely connect to and disconnect from the overhead wires while traveling at speeds of up to 56 mph. The power the trucks draw can recharge an electric battery, which is used when traveling away from the electric roads.
Sweden has become the first country to test the conductive technology on a public highway. On a 2-kilometer (1.2-mile) stretch of the E16 highway near the city of Gavle just north of Stockholm, two diesel hybrid vehicles made by Scania and adapted in collaboration with Siemens will conduct the electric road trials over the next two years to see if the technology is suitable for wider deployment.
The test vehicles will operate in zero emission mode when connected to the overhead cables, switching back to diesel for operation outside of the contact lines. Siemens said the technology's open configuration would allow the trucks to use other forms of power, such as battery or natural gas.
"The Siemens eHighway is twice as efficient as conventional internal combustion engines. The Siemens innovation supplies trucks with power from an overhead contact line. This means that not only is energy consumption cut by half, but local air pollution is reduced too," says Roland Edel, Chief Engineer at the Siemens Mobility Division.
Siemens is also bringing electric road technology to the US. A 1-mile stretch of power lines on a highway near the ports of Long Beach and Los Angeles is currently under construction.
Sweden is one of several European countries, including Norway and the Netherlands , aiming for the majority of vehicles on its roads to be of the zero emission variety by 2030.
Students from ETH Zurich and Hochschule Luzern, two universities in Switzerland, have developed and successfully demonstrated the world’s fastest accelerating electric vehicle.
A video from Academic Motorsports Club Zurich (AMZ) captures the historic albeit anti-climactic moment. Fortunately, the production team did a nice job of building up to the actual run in which the vehicle accelerated from 0 to 60 miles per hour in just 1.513 seconds.
The electric car, dubbed Grimsel, set the record while covering less than 100 feet of track at the Dübendorf air base near Zurich. The previous record of 1.779 seconds was set by a team from the University of Stuttgart last year.
You don’t have to be a car enthusiast to realize that’s ridiculously fast but if you need some convincing, the first few minutes of the entertaining clip should help put it into perspective.
The time is more than half a second faster than the world’s fastest accelerating production car, the hybrid-electric 2014 Porsche 918 Spyder, and nearly a full second faster to 60 than the renowned Bugatti Veyron (and the upcoming Bugatti Chiron ). For reference, Tesla’s blistering fast Model S with Ludicrous speed upgrade takes 2.6 seconds to do the same.
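The record figure is easy to put in physical terms: 0 to 60 mph in 1.513 seconds implies an average acceleration of roughly 1.8 g.

```python
# Average acceleration implied by the record run: 0-60 mph in 1.513 s.
MPH_TO_MS = 0.44704          # exact conversion, miles per hour to m/s
v = 60 * MPH_TO_MS           # 26.82 m/s
t = 1.513                    # seconds
a = v / t                    # ~17.7 m/s^2
g = a / 9.81                 # ~1.81 g
print(round(a, 1), round(g, 2))  # 17.7 1.81
```

That sustained 1.8 g is near the practical traction limit of road tires, which is part of why production cars take well over two seconds for the same sprint.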
Some might be calling foul here as this isn’t a true production car. What’s more, the one-off race car is also incredibly small – hardly bigger than a go-kart. Even still, that’s incredibly fast for anything to reach 60 mph. Job well done!
The power to explore online social media movements -- from the pop cultural to the political -- with the same algorithmic sophistication as top experts in the field is now available to journalists, researchers and members of the public from a free, user-friendly online software suite released by scientists at Indiana University.
The Web-based tools, called the Observatory on Social Media, or "OSoMe" (pronounced "awesome"), provide anyone with an Internet connection the power to analyze online trends, memes and other online bursts of viral activity.
An academic pre-print paper on the tools is available in the open-access journal PeerJ .
"This software and data mark a major goal in our work on Internet memes and trends over the past six years," said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing. The project is supported by nearly $1 million from the National Science Foundation.
"We are beginning to learn how information spreads in social networks, what causes a meme to go viral and what factors affect the long-term survival of misinformation online," Menczer added. "The observatory provides an easy way to access these insights from a large, multi-year dataset. "
The new tools are:
By plugging #thedress into the system, for example, OSoMe will generate an interactive graph showing connections between both the hashtag and the Twitter users who participated in the debate over a dress whose color -- white and gold or blue and black -- was strangely ambiguous. The results show more people tagged #whiteandgold compared to #blueandblack.
For the Ice Bucket Challenge, another widespread viral phenomenon -- in which people doused themselves in cold water to raise awareness about ALS -- the software generates an interactive graph showing how many people tweeted #icebucketchallenge at specific Twitter users, including celebrities.
One example illustrates a co-occurrence network, in which a single hashtag comprises a "node" with lines showing connections to other related hashtags. The larger the node, the more popular the hashtag. The other example illustrates a diffusion network, in which Twitter users show up as points on a graph, and retweets or mentions show up as connecting lines. The larger a cluster of people tweeting a meme -- or the more lines showing retweets and mentions -- the more viral the topic.
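A minimal sketch of how such a co-occurrence network can be built from hashtag data (the tweets here are invented; OSoMe's actual pipeline operates on its multi-year tweet collection):

```python
# Build a hashtag co-occurrence network: node weight = hashtag frequency,
# edge weight = how often two hashtags appear in the same tweet.
from collections import Counter
from itertools import combinations

tweets = [
    ["#thedress", "#whiteandgold"],
    ["#thedress", "#blueandblack"],
    ["#thedress", "#whiteandgold"],
]

node_weight = Counter(tag for t in tweets for tag in t)
edge_weight = Counter()
for t in tweets:
    for a, b in combinations(sorted(set(t)), 2):
        edge_weight[(a, b)] += 1

print(node_weight["#whiteandgold"] > node_weight["#blueandblack"])  # True
print(edge_weight[("#thedress", "#whiteandgold")])                  # 2
```

Rendering `node_weight` as circle sizes and `edge_weight` as line thicknesses gives exactly the kind of interactive graph the text describes, with #whiteandgold drawn larger than #blueandblack.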
OSoMe's social media tools are supported by a growing collection of 70 billion public tweets. The long-term infrastructure to store and maintain the data is provided by the IU Network Science Institute and High Performance Computing group at IU. The system does not provide direct access to the content of these tweets.
The group that manages the infrastructure to store this data is led by Geoffrey Fox, Distinguished Professor in the School of Informatics and Computing. The group whose software analyzes the data is led by Judy Qiu, an associate professor in the school.
"The collective production, consumption and diffusion of information on social media reveals a significant portion of human social life -- and is increasingly regarded as a way to 'sense' social trends," Qiu said. "For the first time, the ability to explore 'big social data' is open not just to individuals with programming skills but everyone as easy-to-use visual tools. "
In addition to pop culture trends, Menczer said, OSoMe provides insight into many other subjects, including social movements and politics, as the online spread of information plays an increasingly important role in modern communication.
The IU researchers who created OSoMe also launched another tool, BotOrNot, in 2014. BotOrNot predicts the likelihood that a Twitter account is operated by a human or a "social bot." Bots are online bits of code used to create the impression that a real person is tweeting about a given topic, such as a product or a person.
The OSoMe project also provides an application program interface, or API, to help other researchers expand upon the tools, or create "mash-ups" that combine its powers with other software or data sources.
Binghamton University computer science assistant professor Timothy Miller, Aaron Carpenter and graduate student Philip Dexter, along with co-author Jeff Bush, have developed Nyami, a synthesizable graphics processing unit (GPU) architectural model for general-purpose and graphics-specific workloads. This marks the first time a team has taken an open-source GPU design and run a series of experiments on it to see how different hardware and software configurations would affect the circuit's performance.
According to Miller, the results will help other scientists make their own GPUs and push computing power to the next level.
"As a researcher, it's important to have tools for realistically evaluating new ideas that may improve performance, energy efficiency, or other challenges in processor architecture," Miller said. "While simulators may take shortcuts, an actual synthesizable open source processor can't cut any corners, so we can say that any experimental results we get are especially reliable."
GPUs have existed for about 40 years and are typically found on commercial video or graphics cards inside of a computer or gaming console. The specialized circuits have computing power designed to make images appear smoother and more vibrant on a screen. There has recently been a movement to see if the chip can be applied to non-graphical computations such as algorithms processing large chunks of data.
"We weren't necessarily looking for novelty in the results, so much as we wanted to create a new tool and then show how it could be used," said Carpenter. "I hope people experiment more effectively on GPUs, as both hobbyists and researchers, creating a more efficient design for future GPUs."
The open-source GPU that the Binghamton team used for their research was the first of its kind. Although thousands of GPUs are produced each year commercially, this is the first that can be modified by enthusiasts and researchers to get a sense of how changes may affect mainstream chips. Bush, the director of software engineering at Roku, was the lead author on the paper.
"It was bad for the open-source community that GPU manufacturers had all decided to keep their chip specifications secret. That prevented open source developers from writing software that could utilize that hardware," Miller said. Miller began working on similar projects in 2004, while Bush started working on Nyami in 2010. "This makes it easier for other researchers to conduct experiments of their own, because they don't have to reinvent the wheel. With contributions from the 'open hardware' community, we can incorporate more creative ideas and produce an increasingly better tool."
The findings could make processors easier for researchers to work with and help them explore different design tradeoffs. Dexter, Miller, Carpenter and Bush have paved a new road that could lead to discoveries affecting everything from space travel to heart surgery.
"I've got a list of paper research ideas we can explore using Nyuzi [the chip has since been renamed], focusing on various performance bottlenecks. The idea is to look for things that make Nyuzi inefficient compared to other GPUs and address those as research problems. We can also use Nyuzi as a platform for conducting research that isn't GPU-specific, like energy efficiency and reliability," Miller said.
The paper, "Nyami: A Synthesizable GPU Architectural Model for General-Purpose and Graphics-Specific Workloads," appeared at the International Symposium on Performance Analysis of Systems and Software (ISPASS). It can be accessed at: http://www.cs.binghamton.edu/~millerti/nyami-ispass2015.pdf
But what if you decide to make changes? You may have to go back, change the design and print the whole thing again, perhaps more than once. So Cornell researchers have come up with an interactive prototyping system that prints what you are designing as you design it; the designer can pause anywhere in the process to test, measure and, if necessary, make changes that will be added to the physical model still in the printer.
"We are going from human-computer interaction to human-machine interaction," said graduate student Huaishu Peng, who described the On-the-Fly-Print system in a paper presented at the 2016 ACM Conference for Human Computer Interaction. Co-authors are François Guimbretière, associate professor of information science; Steve Marschner, professor of computer science; and doctoral student Rundong Wu.
Their system uses an improved version of an innovative "WirePrint" printer developed in a collaboration between Guimbretière's lab and the Hasso Plattner Institute in Potsdam, Germany.
In conventional 3-D printing, a nozzle scans across a stage depositing drops of plastic, rising slightly after each pass to build an object in a series of layers. With the WirePrint technique the nozzle extrudes a rope of quick-hardening plastic to create a wire frame that represents the surface of the solid object described in a computer-aided design (CAD) file. WirePrint aimed to speed prototyping by creating a model of the shape of an object instead of printing the entire solid. The On-the-Fly-Print system builds on that idea by allowing the designer to make refinements while printing is in progress.
The new version of the printer has "five degrees of freedom." The nozzle itself can work only vertically, but the printer's stage can be rotated to present any face of the model facing up; an airplane fuselage, for example, can be turned on its side to add a wing. There is also a cutter to remove parts of the model, say to give the airplane a cockpit.
The nozzle has been extended so it can reach through the wire mesh to make changes inside. A removable base aligned by magnets allows the operator to take the model out of the printer to measure or test to see if it fits where it's supposed to go, then replace it in the precise original location to resume printing.
The software -- a plug-in to a popular CAD program -- designs the wire frame and sends instructions to the printer, allowing for interruptions. The designer can concentrate on the digital model and let the software control the printer. Printing continues while the designer works on the CAD file; once that work is done, the changes are incorporated into the print.
As a demonstration the researchers created a model for a toy airplane to fit into a Lego airport set. This required adding wings, cutting out a cockpit for a Lego pilot and frequently removing the model to see if the wingspan was right to fit on the runway. The entire project was completed in just 10 minutes.
By creating a "low-fidelity sketch" of what the finished product will look like and allowing the designer to redraw it as it develops, the researchers said, "We believe that this approach has the potential to improve the overall quality of the design process."
A video can be found here: https://www.youtube.com/watch?v=X68cfl3igKE
The flashback is due to the speed of today's underwater communication networks, which is comparable to the sluggish dial-up modems from America Online's heyday. The shortcoming hampers search-and-rescue operations, tsunami detection and other work.
But that is changing due in part to University at Buffalo engineers who are developing hardware and software tools to help underwater telecommunication catch up to its over-the-air counterpart.
Their work, including ongoing collaborations with Northeastern University, is described in a study -- "Software-Defined Underwater Acoustic Networks: Toward a High-Rate Real-Time Reconfigurable Modem" -- published in November in IEEE Communications Magazine.
"The remarkable innovation and growth we've witnessed in land-based wireless communications has not yet occurred in underwater sensing networks, but we're starting to change that," says Dimitris Pados, PhD, Clifford C. Furnas Professor of Electrical Engineering in the School of Engineering and Applied Sciences at UB, a co-author of the study.
The amount of data that can be reliably transmitted underwater is much lower compared to land-based wireless networks. This is because land-based networks rely on radio waves, which work well in the air, but not so much underwater.
As a result, sound waves (such as the noises dolphins and whales make) are the best alternative for underwater communication. The trouble is that sound waves encounter such obstacles as path loss, delay and Doppler distortion, which limit their ability to transmit data. Underwater communication is also hindered by the architecture of these systems, which lack standardization, are often proprietary and are not energy-efficient. Pados and a team of researchers at UB are developing hardware and software -- everything from modems that work underwater to open-architecture protocols -- to address these issues. Of particular interest is merging a relatively new communication platform, software-defined radio, with underwater acoustic modems.
Traditional radios, such as an AM/FM transmitter, operate in a limited bandwidth (in this case, AM and FM). The only way to pick up signals outside that band is to take the radio apart and rewire it. Software-defined radio makes this step unnecessary: under computer control, the radio can shift between different frequencies of the electromagnetic spectrum. It is, in other words, a "smart" radio.
Applying software-defined radio to acoustic modems could vastly improve underwater data transmission rates. For example, in experiments last fall in Lake Erie, just south of Buffalo, New York, graduate students from UB proved that software-defined acoustic modems could boost data transmission rates by 10 times what today's commercial underwater modems are capable of.
Potential applications for such technology include search-and-rescue operations, tsunami detection and other underwater sensing work.
The Duke team used the software and images to assess recent forest loss restricting the movement of Peru's critically endangered San Martin titi monkey ( Callicebus oenanthe ) and identify the 10 percent of remaining forest in the species' range that presents the best opportunity for conservation.
"Using these tools, we were able to work with a local conservation organization to rapidly pinpoint areas where reforestation and conservation have the best chance of success," said Danica Schaffer-Smith, a doctoral student at Duke's Nicholas School of the Environment, who led the study. "Comprehensive on-the-ground assessments would have taken much more time and been cost-prohibitive given the inaccessibility of much of the terrain and the fragmented distribution and rare nature of this species."
The San Martin titi monkey inhabits an area about the size of Connecticut in the lowland forests of north central Peru. It was recently added to the International Union for Conservation of Nature's list of the 25 most endangered primates in the world.
Increased farming, logging, mining and urbanization have fragmented forests across much of the monkey's once-remote native range and contributed to an estimated 80 percent decrease in its population over the last 25 years.
Titi monkeys travel an average of 663 meters a day, primarily moving from branch to branch to search for food, socialize or escape predators. Without well-connected tree canopies, they're less able to survive local threats and disturbances, or recolonize in suitable new habitats. The diminutive species, which typically weighs just two to three pounds at maturity, mates for life and produces at most one offspring a year. Mated pairs are sometimes seen intertwining their long tails when sitting next to each other.
Armed with Aster and Landsat satellite images showing the pace and extent of recent forest loss, and GeoHAT, a downloadable geospatial habitat assessment toolkit developed at Duke, Schaffer-Smith worked with Antonio Bóveda-Penalba, program coordinator at the Peruvian NGO Proyecto Mono Tocón, to prioritize where conservation efforts should be focused.
"The images and software, combined with Proyecto Mono Tocón's detailed knowledge of the titi monkey's behaviors and habitats, allowed us to assess which patches and corridors of the remaining forest were the most critical to protect," said Jennifer Swenson, associate professor of the practice of geospatial analysis at Duke, who was part of the research team.
The team's analysis revealed that at least 34 percent of lowland forests in the monkey's northern range, Peru's Alto Mayo Valley, have been lost. It also showed that nearly 95 percent of remaining habitat fragments are likely too small and poorly connected to support viable populations; and less than 8 percent of all remaining suitable habitats lie within existing conservation areas.
Areas the model showed had the highest connectivity comprise just 10 percent of the remaining forest in the northern range, along with small patches elsewhere. These forests present the best opportunities for giving the highly mobile titi monkey the protected paths for movement it needs to survive.
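Habitat-connectivity analyses like the one described above ultimately treat forest patches and the corridors between them as a graph. GeoHAT's actual geospatial methods are far more sophisticated, but the core idea can be sketched with a connected-components search over a hypothetical patch network (the patch names and corridors below are invented for illustration).

```python
# Hypothetical habitat patches and corridors as an adjacency list; a
# connected group of patches is one region a monkey can traverse.
corridors = {
    "A": ["B"], "B": ["A", "C"], "C": ["B"],
    "D": ["E"], "E": ["D"],
    "F": [],
}

def components(graph):
    """Find groups of mutually reachable patches via depth-first search."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node])
        seen |= comp
        comps.append(comp)
    return comps

# The largest connected component marks the best-connected forest region.
print(max(components(corridors), key=len))  # {'A', 'B', 'C'}
```

In a real analysis, each edge would additionally be weighted by corridor length and habitat quality before priorities are ranked.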
Based on this analysis, the team identified a 10-kilometer corridor between Peru's Morro de Calzada and Almendra conservation areas as a high priority for protection.
"For many rare species threatened by active habitat loss, the clock is literally ticking," Schaffer-Smith said. "Software tools like GeoHAT -- or similar software such as CircuitScape -- can spell the difference between acting in time to save them and waiting until it's too late."
Schaffer-Smith, Swenson and Bóveda-Penalba published their peer-reviewed research March 16 in the journal Environmental Conservation.
GeoHAT is a suite of ArcGIS geoprocessing tools designed to evaluate overall habitat quality and connectivity under changing land-use scenarios. It was developed by John Fay, an instructor in the Geospatial Analysis Program at Duke's Nicholas School, and can be used to assess habitats for a wide range of land-based species. (Learn More: http://sites.duke.edu/johnfay/projects/geohat/ )
"Because of a lack of health personnel in many developing countries, ear infections are often misdiagnosed or not diagnosed at all. This may lead to hearing impairments, and even to life-threatening complications," says Claude Laurent, researcher at the Department of Clinical Sciences at Umeå University and co-author of the article. "Using this method, health personnel can diagnose middle ear infections with the same accuracy as general practitioners and paediatricians. Since the system is cloud-based, meaning that the images can be uploaded and automatically analysed, it provides rapid access to accurate and low-cost diagnoses in developing countries."
The researchers at Umeå University have collaborated with the University of Pretoria in South Africa in their effort to develop an image-processing technique to classify otitis media. The technique was recently described in the journal EBioMedicine -- a new Lancet publication.
The software system consists of a cloud-based analysis of images of the eardrum taken using an otoscope, which is an instrument normally used in the medical examination of ears. Images of eardrums, taken with a digital otoscope connected to a smartphone, were compared to high-resolution images in an archive and automatically categorised according to predefined visual features associated with five diagnostic groups.
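The article does not describe the classifier's internals, so the following is only a minimal sketch of the general idea of archive-based image classification: each new eardrum image is reduced to a feature vector and assigned the diagnostic label of its closest match in a precomputed archive. The feature values and labels below are hypothetical.

```python
import math

# Hypothetical archive: precomputed feature vectors for eardrum images,
# each labelled with one of the diagnostic groups.
archive = [
    ((0.9, 0.1, 0.2), "acute otitis media"),
    ((0.1, 0.8, 0.3), "otitis media with effusion"),
    ((0.2, 0.2, 0.9), "normal"),
]

def classify(features):
    """Assign the label of the nearest archive image (1-nearest neighbour)."""
    _, label = min(archive, key=lambda entry: math.dist(entry[0], features))
    return label

print(classify((0.85, 0.15, 0.25)))  # "acute otitis media"
```

A production system would extract many more visual features (colour, translucency, bulging of the eardrum) and use a trained classifier rather than a raw nearest-neighbour match.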
Tests showed that the automatically generated diagnoses based on images taken with a commercial video-otoscope had an accuracy of 80.6 per cent, while an accuracy of 78.7 per cent was achieved for images captured on-site with a low cost custom-made video-otoscope. This high accuracy can be compared with the 64-80 per cent accuracy of general practitioners and paediatricians using traditional otoscopes for diagnosis.
"This method has great potential to ensure accurate diagnoses of ear infections in countries where such opportunities are not available at present. Since the method is both easy and cheap to use, it enables rapid and reliable diagnoses of a very common childhood illness," says Claude Laurent.
More and more safety-critical embedded electronic solutions are based on rapid, energy-efficient multi-core processors. "Two of the most important requirements of future applications are increased performance in real time and further reduction of costs without adversely affecting functional safety," says Professor Jürgen Becker of the Institute for Information Processing Technology (ITIV) at KIT, who coordinates ARGO. "For this, multi-core processors have to make available the required performance spectrum at minimum energy consumption in an automated and efficiently programmed manner."
Multi-core systems are characterized by the accommodation of several processor cores on one chip. The cores work in parallel and, hence, reach a higher speed and performance. Programming such heterogeneous multi-core processors is very complex. Moreover, the programs have to be tailored precisely to the target hardware and fulfill additional real-time requirements. The ARGO EU research project, named after the swift vessel of Greek mythology, aims to significantly facilitate programming through automatic parallelization of model-based applications and code generation. Until now, programmers have had to adapt their code, i.e. the instructions for the computer, to the hardware architecture, which takes considerable effort and prevents the code from being transferred to other architectures.
"Under ARGO, a new standardizable tool chain for programmers is being developed. Even without precise knowledge of the complex parallel processor hardware, the programmers can control the process of automatic parallelization in accordance with the requirements. This results in a significant improvement of performance and a reduction of costs," Becker says.
In the future, the ARGO tool chain can be used to manage the complexity of parallelization and adaptation to the target hardware in a largely automated manner with a small expenditure. Under the project, real-time-critical applications in the areas of real-time flight dynamics simulation and real-time image processing are studied and evaluated by way of example.
Conventional light microscopy can attain only a defined lower resolution limit that is restricted by light diffraction to roughly 1/4 of a micrometre. High resolution fluorescence microscopy makes it possible to obtain images with a resolution markedly below these physical limits. The physicists Stefan Hell, Eric Betzig, and William Moerner were awarded the Nobel Prize in 2014 for developing this important key technology for biomedical research. Currently, one of the ways in which researchers in this domain are trying to attain a better resolution is by using structured illumination. At present, this is one of the most widespread procedures for imaging dynamic processes in living cells. This method achieves a resolution of 100 nanometres at a high frame rate without damaging the specimens during measurement. Such high resolution fluorescence microscopy is also being applied and further developed in the Biomolecular Photonics Group at Bielefeld's Faculty of Physics. For example, it is being used to study the function of the liver or the ways in which HIV spreads.
However, scientists cannot use the raw images gained with this method straight away. 'The data obtained with the microscopy method require a very laborious mathematical image reconstruction. Only then do the raw data recorded with the microscope result in a high-resolution image,' explains Professor Dr. Thomas Huser, head of the Biomolecular Photonics Group. Because this stage requires a complicated mathematical procedure that has been accessible for only a few researchers up to now, there was previously no open source software solution that was easily available for all researchers. Huser sees this as a major obstacle to the use and further development of the technology. The software developed in Bielefeld is now filling this gap.
Dr. Marcel Müller from the Biomolecular Photonics Group has managed to produce such universally implementable software. 'Researchers throughout the world are working on building new, faster, and more sensitive microscopes for structured illumination, particularly for the two-dimensional representation of living cells. For the necessary post-processing, they no longer need to develop their own complicated solutions but can use our software directly, and, thanks to its open source availability, they can adjust it to fit their problems,' Müller explains. The software is freely available to the global scientific community as an open source solution, and as soon as its availability was announced, numerous researchers, particularly in Europe and Asia, requested and installed it. 'We have already received a lot of positive feedback,' says Marcel Müller. 'That also reflects how necessary this new development has been.'
Network packet capture performs essential functions in modern network management such as attack analysis, network troubleshooting, and performance debugging. As the network edge bandwidth currently exceeds 10 Gbps, the demand for scalable packet capture and retrieval is rapidly increasing. However, existing software-based packet capture systems neither provide high performance nor support flow-level indexing for fast query response. These shortcomings either prevent important packets from being stored or make it too slow to retrieve relevant flows.
A research team led by Professor KyoungSoo Park and Professor Yung Yi of the School of Electrical Engineering at Korea Advanced Institute of Science and Technology (KAIST) has recently presented FloSIS, a highly scalable software-based network traffic capture system that supports efficient flow-level indexing for fast query response.
FloSIS is characterized by three key advantages. First, it achieves high-performance packet capture and disk writing by exercising full parallelism in computing resources such as network cards, CPU cores, memory, and hard disks. It adopts the PacketShader I/O Engine (PSIO) for scalable packet capture and performs parallel disk writes for high-throughput flow dumping. Towards high zero-drop performance, it strives to minimize the fluctuation of packet processing latency.
Second, FloSIS generates two-stage flow-level indexes in real time to reduce the query response time. The indexing utilizes Bloom filters and sorted arrays to quickly reduce the search space of a query. Also, it is designed to consume only a small amount of memory while allowing flexible queries with wildcards, ranges of connection tuples, and flow arrival times.
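The two-stage idea can be illustrated with a toy sketch. This is not FloSIS's code; it only shows how a Bloom filter gives a cheap in-memory rejection test (stage one) before a binary search over a sorted key array narrows the match (stage two). The flow keys below are invented.

```python
import bisect
import hashlib

class TinyBloom:
    """A minimal Bloom filter: may give false positives, never false negatives."""
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes, self.array = bits, hashes, 0

    def _positions(self, key):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, key):
        for p in self._positions(key):
            self.array |= 1 << p

    def might_contain(self, key):
        return all(self.array >> p & 1 for p in self._positions(key))

# Stage 2 index: a sorted array of flow keys, binary-searchable.
flows = sorted(["10.0.0.1:80", "10.0.0.2:443", "10.0.0.3:22"])
bloom = TinyBloom()            # Stage 1 index: rules out misses instantly.
for f in flows:
    bloom.add(f)

def lookup(key):
    if not bloom.might_contain(key):   # cheap in-memory rejection
        return False
    i = bisect.bisect_left(flows, key) # narrow search in the sorted index
    return i < len(flows) and flows[i] == key

print(lookup("10.0.0.2:443"))  # True
print(lookup("10.0.0.9:80"))   # False
```

In the real system there would be one such filter per dump file, so most files on disk are never touched when answering a query; wildcard and range queries would scan the sorted arrays rather than probing a single key.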
Third, FloSIS supports flow-level content deduplication in real time for storage savings. Even with deduplication, the system still records the packet-level arrival time and headers to provide the exact timing and size information. For an HTTP connection, FloSIS parses the HTTP response header and body to maximize the hit rate of deduplication for HTTP objects.
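Content deduplication of this kind is typically built on content hashing: identical payloads hash to the same digest and are stored once, while per-flow timing and size metadata are always kept. The sketch below illustrates that idea only; FloSIS's actual on-disk format and HTTP-aware parsing are more involved.

```python
import hashlib

store = {}      # content digest -> payload, written at most once
flow_meta = []  # per-flow metadata always recorded: timing, size, content ref

def record_flow(ts, payload):
    """Record a flow; duplicate content costs only a metadata entry."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest not in store:          # store identical content only once
        store[digest] = payload
    flow_meta.append({"time": ts, "size": len(payload), "content": digest})

record_flow(1.0, b"...logo.png bytes...")
record_flow(2.5, b"...logo.png bytes...")   # duplicate HTTP object

print(len(flow_meta), len(store))  # 2 flows recorded, 1 copy stored
```

Parsing the HTTP response header and body separately, as FloSIS does, means the same image or script fetched over different connections still hashes to the same digest, maximizing the hit rate.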
These design choices bring enormous performance benefits. On a server machine with dual octa-core CPUs, four 10Gbps network interfaces, and 24 SATA disks, FloSIS achieves up to 30 Gbps for packet capture and disk writing without a single packet drop. Its indexes take up only 0.25% of the stored content while avoiding slow linear disk search and redundant disk access. On a machine with 24 hard disks of 3 TB, this translates into 180 GB of index for 72 TB of total disk space, which could be managed entirely in memory or stored on solid state disks for fast random access. Finally, FloSIS deduplicates 34.5% of the storage space for 67 GB of a real traffic trace with only 256 MB of extra memory consumption for a deduplication table. In terms of performance, it achieves about 15 Gbps zero-drop throughput with real-time flow deduplication.
This work was presented at the 2015 USENIX Annual Technical Conference (ATC) on July 10, 2015, in Santa Clara, California.
Computer programs often contain defects, or bugs, that need to be found and repaired. This manual "debugging" usually requires valuable time and resources. To help developers debug more efficiently, automated debugging solutions have been proposed. One approach goes through information available in bug reports. Another goes through information collected by running a set of test cases. Until now, explains David Lo from Singapore Management University's (SMU) School of Information Systems, there has been a "missing link" that prevents these information-gathering threads from being combined.
Dr Lo, together with colleagues from SMU, has developed an automated debugging approach called Adaptive Multimodal Bug Localisation (AML). AML gleans debugging hints from both bug reports and test cases, and then performs a statistical analysis to pinpoint program elements that are likely to contain bugs.
"While most past studies only demonstrate the applicability of similar solutions for small programs and 'artificial bugs' [bugs that are intentionally inserted into a program for testing purposes], our approach can automate the debugging process for many real bugs that impact large programs," Dr Lo explains. AML has been successfully evaluated on programs with more than 300,000 lines of code. By automatically identifying buggy code, developers can save time and redirect their debugging effort to designing new software features for clients.
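The article does not specify AML's statistical formula, but statistical bug localisation of this family is commonly illustrated with spectrum-based suspiciousness scores such as Ochiai: a program element that runs in most failing tests but few passing ones ranks as most suspicious. The coverage counts below are hypothetical.

```python
import math

# Hypothetical coverage data: for each program element, how often it was
# executed by failing and by passing test cases.
coverage = {
    "parse_input":  {"fail": 4, "pass": 10},
    "apply_rule":   {"fail": 4, "pass": 1},
    "write_output": {"fail": 0, "pass": 12},
}
total_failing = 4

def ochiai(elem):
    """Ochiai suspiciousness: high when an element runs in most failing
    tests but few passing ones."""
    c = coverage[elem]
    denom = math.sqrt(total_failing * (c["fail"] + c["pass"]))
    return c["fail"] / denom if denom else 0.0

# Rank program elements from most to least suspicious.
ranked = sorted(coverage, key=ochiai, reverse=True)
print(ranked[0])  # apply_rule -- most likely to contain the bug
```

AML additionally weighs hints mined from bug-report text, which is what lets it combine the two information sources the article describes.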
Dr Lo and his colleagues are now planning to contact several industry partners to take AML one step closer toward integration as a software development tool.
Dr Lo's future plans involve developing an Internet-scale software analytics solution. This would involve analysing massive amounts of data that passively exist in countless repositories on the Internet in order to transform manual, painstaking and error-prone software engineering tasks into automated activities that can be performed efficiently and reliably. This is done, says Dr Lo, by harvesting the wisdom of the masses -- accumulated through years of effort by thousands of software developers -- hidden in these passive, distributed and diversified data sources.
Google Glass, one of the newest forms of wearable technology, offers researchers a hands-free and flexible monitoring system. To make Google Glass work for their purposes, Zhang et al. custom-developed hardware and software that take advantage of voice commands ("ok glass") and other features in order to not only monitor but also remotely control their liver- and heart-on-a-chip systems. Using valves remotely activated by the Glass, the team introduced pharmaceutical compounds on liver organoids and collected the results. Their results appear this week in Scientific Reports.
"We believe such a platform has widespread applications in biomedicine, and may be further expanded to health care settings where remote monitoring and control could make things safer and more efficient," said senior author Ali Khademhosseini, PhD, Director of the Biomaterials Innovation Research Center at BWH.
"This may be of particular importance in cases where experimental conditions threaten human life -- such as work involving highly pathogenic bacteria or viruses or radioactive compounds," said lead author Shrike Zhang, PhD, also of BWH's Biomedical Division.