Articles

46 articles, 2016-07-05 06:00

 

 1 

Tesla is being investigated in wake of first autopilot-related death (1.02/2)

The National Highway Traffic Safety Administration (NHTSA) has launched a “preliminary evaluation” into an accident in which a 40-year-old man was killed while his Tesla Model S was in autopilot mode.
This is the first known fatality in more than 130 million miles where autopilot was activated, Tesla said.
According to the Levy Journal, the accident took place on May 7 in Williston, Florida. The victim, identified as Joshua Brown, was reportedly an active member of the Tesla subreddit. Roughly two months ago, a video of his Model S autopilot avoiding a crash went viral and was even tweeted by Tesla CEO Elon Musk.
In a statement on its website, Tesla said Brown had a loving family and was a friend of both Tesla and the broader EV community.
According to Tesla, the vehicle was traveling on a divided highway with autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky. As such, the brakes were never applied.
The car essentially drove “under” the gap of the trailer, making contact with one of its weakest points: the windshield. Had the accident involved the front or the rear of the trailer, Tesla said, its crash safety system would likely have prevented serious injury.
Tesla extended its deepest sympathies to Brown's friends and family.

 

 2 

IBM to Buy EZSource to Help Developers Modernize Mainframe Apps (0.01/2)

IBM will acquire EZSource to help customers modernize mainframe apps for migration to hybrid cloud as part of digital transformation efforts.
The mainframe is not dead, and IBM is doing its part to ensure that big iron will not be going anywhere for some time.
Big Blue announced on June 1 its intent to acquire EZ Legacy (EZSource), an Israel-based application discovery company, to help developers easily understand and change mainframe code based on data displayed on a dashboard and other visualizations. The acquisition is expected to close in the second quarter of 2016. Financial terms of the deal were not released.
The ability to quickly evaluate existing enterprise applications and modify them represents a critical piece of the digital transformation puzzle. IBM is intent on helping its customers transform their organizations for the digital age by gaining value out of their mainframe assets. The company will integrate its expertise in hybrid cloud, the API economy and DevOps with EZSource's mainframe application modernization technology.
EZSource provides a visual dashboard that shows developers which applications have changed to ease the process of modernizing applications, exposing APIs and opening up development resources.
IBM's decision to acquire its long-term partner EZSource is largely driven by the size of the digital transformation and API economy market, estimated at $3.75 billion. To capture a share of it, companies must first understand and modify legacy mainframe software so it can sit at the center of their digital enterprise, IBM said in a post by Mary Hall, an IBM marketing and social media staffer, on the IBM Mainframe Insights blog.
"The mainframe is the backbone of today's businesses," said Ross A. Mauri, general manager of IBM z Systems, in a statement. "As clients drive their digital transformation, they are seeking the innovation and business value from new applications while leveraging their existing assets and processes. "
Combining EZSource's offerings with IBM's will reduce the need for developers with specialized skills to handle processes that previously were manually intensive, Mauri noted.
IBM's API management solutions, including z/OS Connect and IBM API Connect, integrated with EZSource's technology will help connect services from core mainframe applications to new mobile, cloud, social and cognitive workloads, IBM said.
"While they have always been highly exaggerated, rumors of the IBM mainframe's death continue to circulate," said Charles King, principal analyst at Pund-IT. "The platform's notable longevity and success are due to numerous factors but first and foremost has been IBM's efforts to continually evolve mainframe technologies and make them relevant for new business processes and use cases. "
King said this deal should add a few more years to the mainframe's "remarkable" life.
Meanwhile, IBM's DevOps offerings, such as IBM Application Delivery Foundation for z Systems and IBM Rational Team Concert , will combine with the EZSource software to help developers migrate legacy mainframe apps faster.
IBM said EZSource provides developers with information about which sections of code access a particular entity, such as a database table, so they can easily check them to see if updates are needed. Without the advanced analytics in the EZSource solution, developers would need to manually check thousands or millions of lines of code to find the ones that need to be changed.
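EZSource's static analysis goes far beyond text matching, but a minimal sketch conveys the idea of an impact scan. Everything below (the file extensions, the table name, the helper itself) is hypothetical and purely illustrative:

```python
import re
from pathlib import Path

def find_table_references(src_root, table, extensions=(".cbl", ".cpy")):
    """Naive cross-reference scan: report every source line that mentions
    a given database table, so a developer can review just those spots
    instead of reading millions of lines by hand."""
    pattern = re.compile(rf"\b{re.escape(table)}\b", re.IGNORECASE)
    hits = []
    for ext in extensions:
        for path in Path(src_root).rglob(f"*{ext}"):
            for lineno, line in enumerate(
                    path.read_text(errors="ignore").splitlines(), start=1):
                if pattern.search(line):
                    hits.append((str(path), lineno, line.strip()))
    return hits

# e.g. find_table_references("mainframe-src/", "CUSTOMER_ACCT")
```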
EZSource delivers three key products:
-- EZSource:Analyze, which provides a graphical visualization of application discovery and understanding for developers and architects;
-- EZSource:Dashboard, which offers multiple categories of application metrics for managers and executives; and
-- EZSource:Server, which integrates with third-party source code management, workload automation and CMDB tooling systems to provide application-to-infrastructure mapping.
"The subtext of IBM's purchase of EZSource is the critical importance of reconciling and integrating new mobile and social apps with traditional backend 'systems of record'—particularly the IBM mainframes residing in major banks and financial organizations that power 30 billion business transactions every day," King said.
Supporting and streamlining the integration process is crucial for IBM and its customers since failure could cripple emerging processes, like smartphone "pay" apps, he added.
Meanwhile, enabling a newer generation of developers to support the mainframe has been Compuware's mission for the last several years. Tools that provide deep application understanding via visualization enable both mainframe and non-mainframe developers to manipulate mainframe data and implement code changes, faster and with fewer mistakes, Compuware said.
"As businesses increasingly compete via digital means and the mainframe serves as a back-end server for mobile and web front ends, development teams must keep pace with the requirements of modern application delivery," Chris O'Malley CEO of Compuware, told eWEEK. Compuware's Topaz product has offered visualization tools since January 2015, he said.

 

 3 

Predicting cell behavior with a mathematical model -- ScienceDaily (0.01/2)

One of the most important foundations of the modern life sciences is the ability to cultivate cells outside the body and to observe them with optical microscopes. In this way, cellular processes can be analysed in much more quantitative detail than in the body. At the same time, however, a problem arises. "Anyone who has ever observed biological cells under a microscope knows how unpredictable their behaviour can be. When they are on a traditional culture dish they lack 'orientation', unlike in their natural environment in the body. That is why, for certain research questions, it is difficult to derive any regularities from their shape and movement," explains Prof. Schwarz. In order to learn more about the natural behaviour of cells, the researchers therefore resort to methods from materials science. The substrate for microscopic study is structured in such a way that it normalises cell behaviour. The Heidelberg physicists explain that with certain printing techniques, proteins are deposited on the substrate in geometrically well-defined areas. The cell behaviour can then be observed and evaluated with the usual microscopy techniques.
The group of Ulrich Schwarz aims to describe in mathematical terms the behaviour of biological cells on micropatterned substrates. Such models should make it possible to quantitatively predict cell behaviour for a wide range of experimental setups. For that purpose, Philipp Albert has developed a complex computer programme which considers the essential properties of individual cells and their interactions. It can also predict how large collections of cells behave on the given geometric structures. He explains: "Surprising new patterns often emerge from the interplay of several cells, such as streams, swirls and bridges. As in physical systems, e.g. fluids, the whole is here more than the sum of its parts. Our software package can calculate such behaviour very rapidly." Dr Albert's computer simulations show, for example, how skin cell ensembles can overcome gaps in a wound model up to about 200 micrometres wide.
Another promising application of these advances is investigated by Dr. Holger Erfle and his research group at the BioQuant Centre, namely high throughput screening of cells. Robot-controlled equipment is used to carry out automatic pharmacological or genetic tests with many different active substances. They are, for example, designed to identify new medications against viruses or for cancer treatment. The new software now enables the scientists to predict what geometries are best suited for a certain cell type. The software can also show the significance of changes in cell behaviour observed under the microscope.
The research projects by Prof. Schwarz, Dr. Albert and Dr. Erfle received European Union funding from 2011 to 2015 via the program "Micropattern-Enhanced High Throughput RNA Interference for Cell Screening" (MEHTRICS). Besides the BioQuant Centre, this consortium included research groups from Dresden, France, Switzerland and Lithuania. The total support for the projects amounted to EUR 4.4 million.

 

 4 

Robots get creative to cut through clutter: Algorithm balances 'pick and place' with 'push and shove' -- ScienceDaily

The software not only helped a robot deal efficiently with clutter; it also, surprisingly, revealed the robot's creativity in solving problems.
"It was exploiting sort of superhuman capabilities," Siddhartha Srinivasa, associate professor of robotics, said of his lab's two-armed mobile robot, the Home Exploring Robot Butler, or HERB. "The robot's wrist has a 270-degree range, which led to behaviors we didn't expect. Sometimes, we're blinded by our own anthropomorphism. "
In one case, the robot used the crook of its arm to cradle an object to be moved.
"We never taught it that," Srinivasa added.
The rearrangement planner software was developed in Srinivasa's lab by Jennifer King, a Ph.D. student in robotics, and Marco Cognetti, a Ph.D. student at Sapienza University of Rome who spent six months in Srinivasa's lab. They will present their findings May 19 at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden.
In addition to HERB, the software was tested on NASA's KRex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, KRex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.
Robots are adept at "pick-and-place" (P&P) processes, picking up an object in a specified place and putting it down at another specified place. Srinivasa said this has great applications in places where clutter isn't a problem, such as factory production lines. But that's not what robots encounter when they land on distant planets or when "helpmate" robots eventually land in people's homes.
P&P simply doesn't scale up in a world full of clutter. When a person reaches for a milk carton in a refrigerator, he doesn't necessarily move every other item out of the way. Rather, a person might move an item or two, while shoving others out of the way as the carton is pulled out.
The rearrangement planner automatically finds a balance between the two strategies, Srinivasa said, based on the robot's progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted or stepped on. And it can be taught to pay attention to items that might be valuable or delicate, in case it must extricate a bull from a china shop.
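To make the idea concrete, here is a toy decision rule in Python. This is not the CMU planner (which reasons over full physics and geometry), and the mass threshold and fragility flag are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    mass_kg: float
    delicate: bool  # e.g. the china in the bull-in-a-china-shop case

def choose_action(ob: Obstacle, max_push_mass: float = 2.0) -> str:
    """Toy rule: grasp what is delicate, shove what is light, and fall
    back to pick-and-place for everything else."""
    if ob.delicate:
        return "pick_and_place"  # never shove items flagged as fragile
    if ob.mass_kg <= max_push_mass:
        return "push"            # cheap and imprecise, fine for light clutter
    return "pick_and_place"

clutter = [Obstacle("teacup", 0.2, True),
           Obstacle("cereal box", 0.4, False),
           Obstacle("cast-iron pot", 3.5, False)]
for ob in clutter:
    print(ob.name, "->", choose_action(ob))
```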
One limitation of this system is that once the robot has evaluated a situation and developed a plan to move an object, it effectively closes its eyes to execute the plan. Work is underway to provide tactile and other feedback that can alert the robot to changes and miscalculations and can help it make corrections when necessary. NASA, the National Science Foundation, Toyota Motor Engineering and Manufacturing and the Office of Naval Research supported this research.

 

 5 

IBM Launches NYC Bluemix Garage With Former Azure Exec

Led by a former Microsoft executive, IBM opened a new Bluemix Garage in New York City to focus on blockchain, fintech and financial services.
Shawn Murray was perfectly happy as the senior director of Azure digital sales at Microsoft, as he had been with the company for 18 years—continually moving up the ladder.
Then Murray got a glimpse of what IBM was doing with its Bluemix platform and its outreach to developers, and he decided to make a change. Murray joined IBM as worldwide director of Bluemix and Blockchain Garages.
Speaking with eWEEK this week about the launch of IBM's latest Bluemix Garage in New York City, Murray said it was IBM's focus on design in addition to its cloud and developer focus that won him over. Steve Robinson, general manager of IBM Cloud Platform Services, who had a key role in establishing the IBM Bluemix Garages, helped recruit Murray away from Microsoft—where he'd spent the last seven years leading Azure sales both in the enterprise and the ISV spaces.
"I was pretty happy with my role at Microsoft, but once he told me about the garages and I started digging into what they do here, I knew this was the place for me," Murray said in an interview. "Because what they've done at IBM is truly magical. "
Murray said IBM has combined the technical capabilities and the roles of the developers and the architects with designers who have Ph.D.s in psychology and design thinking, and it has built this entire method around how to build apps in an innovative way.
"Microsoft just didn't have that," he said. "They could help you build an app. But IBM's difference is that whole process and the design thinking. "
The design element is what made a difference for Murray. For example, one of the Bluemix Garage engagements Murray sat in on was a small startup out of San Francisco that had a complete idea and knew exactly what it wanted to build. IBM had the company come to the garage for a design thinking workshop to help it visualize what it was trying to solve and what experience it wanted its end users to have. After the design workshop, the startup abandoned its initial idea because it realized that what it was trying to build wasn't really going to solve its problem.
"That's the differentiator for us," Murray said. "We have the combination of designers who can think through challenges. Not just visual designers, but experience designers, and we bundle that with architects and the developer assets that we have. So for me personally, it was just a perfect fit. "
IBM is indeed serious about design. In April, Big Blue established a new Distinguished Designer program and placed it on the level of the company's 2-decade-old Distinguished Engineer program. IBM recognizes design as a technical craft that is as critical as engineering to the long-term success of the company and a key driver of value for its customers, said Fahad Osmani, talent director for IBM Design.
IBM Design is three years into its mission of driving a culture of design within the company. The company has built what it claims is the world's largest design team, with 1,250 designers and 29 design studios around the world. Designers work on multidisciplinary teams on IBM products; digital engagement platforms for customers via the company's digital agency, IBM Interactive Experience (IBM iX); and branding and marketing initiatives.

 

 6 

SmartBear Collaborator 10 Enables Collaboration Across Dev Teams

SmartBear Collaborator 10.0 improves code collaboration with integrations for Microsoft Visual Studio, Microsoft Word and IBM Rational Team Concert. SmartBear Software, a provider of software quality tools for developers, announced the release of a major revision of its developer collaboration tool, Collaborator 10.0.
SmartBear's Collaborator enables development, testing and management teams to work better together to produce high quality code, applications and services. The tool enables users to review user stories and requirements, review code, communicate within and across teams, and deliver quality software all from one interface.
The new release introduces Community, Team and Enterprise editions to better serve software development teams of all sizes. In addition, SmartBear Collaborator provides integration with Microsoft Visual Studio, IBM Rational Team Concert and the ability to review Microsoft Word and Adobe PDF files.
"This major release of Collaborator 10.0 delivers new functionality including full integration with Visual Studio, which makes review creation and participation very easy," said Justin Collier, product owner for Collaborator at SmartBear, in a statement. "This leads to higher review participation across development teams and ultimately more reliable and higher quality products. "
Collier noted that the new Team edition of Collaborator extends the product to organizations that do not require all of the functionality of Collaborator's more comprehensive Enterprise edition. The Enterprise edition is aimed at organizations that require scalable and customizable code and documentation team review, he said.
Collaborator also integrates with a host of version control systems and integrated development environments (IDEs), including Git, Subversion, Microsoft Team Foundation Server, Perforce, GitHub and Eclipse, as well as bug tracking systems such as Jira and Bugzilla.
SmartBear is demonstrating SmartBear Collaborator 10.0 at the Visual Studio Live conference this week in Cambridge, Mass.
Meanwhile, SmartBear also released the results of a recent survey on the benefits of code reviews. In the survey of more than 600 developers, testers and IT operations professionals, 90 percent said the biggest benefit of code reviews is improved software quality. Seventy-two percent cited sharing knowledge across teams, and 59 percent cited enhanced maintainability of code.
Last month, SmartBear announced it had acquired CrossBrowserTesting, an automated cloud testing platform. The acquisition enabled CrossBrowserTesting to further accelerate and scale its Web and mobile cloud testing solution using SmartBear's global resources. Financial terms of the deal were not disclosed.
At the time of the acquisition, SmartBear said CrossBrowserTesting had more than 200,000 users, with more than 5 million tests run to date. CrossBrowserTesting provides a cloud testing environment with more than 1,500 mobile and desktop browsers in more than 65 operating systems, including iOS, Android and Windows.
"CrossBrowserTesting offers customers an easy way to use a cloud service for testing applications written for browsers and real mobile devices," said Doug McNary, CEO of SmartBear, in a statement after the acquisition. "It has continued to win the trust of customers by building a reliable, affordable and easy-to-use automated testing cloud platform. "
McNary said SmartBear intends to maintain CrossBrowserTesting as a standalone service, while providing the resources and investments necessary to help CrossBrowserTesting build up its business operating as a standalone entity inside SmartBear.

 

 7 

A Platform for All Developers

Microsoft continues to enhance its .NET platform to support any developer writing any application on any platform. To that end, the software giant recently held dotnetConf, a virtual conference focused on .NET and where the general-purpose platform is going. .NET has several key features that are attractive to many developers, including automatic memory management and modern programming languages, that make it easier to build high-quality apps more efficiently. During dotnetConf, Microsoft announced that .NET Core 1.0 will be released to manufacturing (RTM) on June 27. .NET Core is a cross-platform implementation of .NET that runs on Windows, with ports in progress for Linux, OS X and FreeBSD. Also during the dotnetConf event, Xamarin announced a new stable release of the Xamarin Platform, which co-founder and CTO Miguel de Icaza said features the biggest and best release of Xamarin Studio yet. It has a type system that is now powered by Roslyn, Microsoft's open-source .NET compiler platform. This eWEEK slide show takes a look at some of the things Microsoft presented and where .NET is headed.

 

 8 

IBM Enhances Support for the Swift Programming Language

At WWDC, IBM extended its already-considerable support for the Swift programming language, particularly for using Swift for server-side development.
Apple's Swift programming language continues to gain popularity among developers, and IBM, as a key Apple partner, is putting its considerable might behind the technology.
This week at Apple's Worldwide Developer Conference (WWDC), IBM announced new tooling and support for Swift, along with updates on the uptick in momentum Swift has seen at IBM and its developer community.
IBM has been creating mobile applications for its MobileFirst for iOS platform using Swift, but the company also is making strides in extending Swift for server-side development.
"From IBM's perspective , Swift on the server is already a global phenomenon," John Ponzo, an IBM fellow and vice president and CTO for IBM MobileFirst, wrote in a blog post. "This month, the number of code runs in the popular IBM Swift Sandbox topped 1.5 million. "
If you are not familiar with the Sandbox, it's a cloud environment IBM made public last December with the Swift.org launch, Ponzo said. At the time, IBM announced it would be participating in the new project to help extend Swift to the server, and the company used its sandbox to test its code and shared access with others.
"This enabled developers, regardless of OS, who were interested in server-side Swift to give it a try without needing to stand up their own server," Ponzo said.
At last year's WWDC, Apple announced plans to open-source Swift and delivered it to the community last December. This week, the Swift.org community launched the first preview of Swift 3.0.
Calling Swift "a game changer for enterprises," Phil Buckellew, vice president of Enterprise Mobile for the IBM Software Group, said IBM is the first cloud provider to enable the development of applications in native Swift.
"IBM has experienced the benefits of Swift on the cloud first-hand, and we are one of the largest digital agencies using Swift today with more than 100 enterprise apps developed in the language," Buckellew said in a blog post .
Adding to its potent support for Swift, IBM offered up two new capabilities. One is IBM Cloud Tools for Swift.
IBM Cloud Tools for Swift, a free app also known as ICT, provides Mac users with a simple interface for deploying, managing and monitoring end-to-end Swift applications, Brian White Eagle, an offering manager in the Mobile Innovation Lab, said in a blog post.
"The application integrates with tools designed by IBM Swift engineers to easily get started writing Swift on the server," White Eagle said in his post, which is a step-by-step guide for getting started with ICT.
IBM Cloud Tools for Swift simplifies the management and deployment of server-side assets, he said. It is a Mac application that enables developers to group client-side and server-side code written in Swift, deploy the server-side code to IBM's Bluemix cloud platform, and then manage projects using ICT.
Buckellew explained that for some Swift developers the key to productivity is working in the Xcode environment on a Mac. ICT simplifies the management and deployment of server-side assets in an environment complementary to Xcode.
"The developer experience is important to us, and we think developing Swift apps on the cloud should be simple and fast," he noted.
IBM also announced Swift on LinuxONE, the company's Linux-based mainframe servers, meaning developers are now able to run Swift on those systems, Buckellew said.
"The safety, speed and expressiveness of Swift are now available with a level of performance and scale unmatched by any previous platform," he noted. "Having Swift on LinuxONE allows developers to do fit-for-purpose placement of workloads that need access to data in a high-performing, secure, reliable and scalable environment. "
Also, the IBM Swift Sandbox is now enabled with a beta driver of Swift on LinuxONE.
IBM introduced its Kitura Web Framework as an open-source technology in February at its InterConnect 2016 conference. Kitura enables the development of back-end portions of applications for Swift. Written in Swift, Kitura enables both mobile front-end and back-end portions of an application to be written in the same language, simplifying modern application development.
Buckellew cited the example of City Furniture, an IBM customer that used Swift for both client-side and server-side development. The furniture retailer created a mobile solution in just six weeks that enabled the company to transform clearance merchandise from a cost-recovery to a profitable product segment, he said.
"City Furniture recreated 90 percent of the functionality of their previous API with IBM's Swift server-side development packages using Kitura in a fraction of the time," Buckellew said.
Meanwhile, for its part, Apple this week announced Swift Playgrounds, a new app for the iPad that is designed to make learning to code in Swift easy and fun for beginners. Apple delivered a preview release of Swift Playgrounds at WWDC as part of the iOS 10 developer preview and it will be available with the iOS 10 public beta in July. The final version of Swift Playgrounds will be available in the App Store for free this fall.

 

 9 

Flaws in Free SSL Tool Allowed Attackers to Get SSL Certificates for Any Domain

StartCom, the CA (Certificate Authority) behind the StartSSL service, launched the StartEncrypt project on June 4, inspired by the success of the Let's Encrypt project.
Users who want to deploy free StartSSL certificates can take advantage of the StartEncrypt offering. They only need to download a Linux client and upload it to their servers.
This client performs a domain validation process and informs the StartSSL service, which then issues and installs an "Extended Validation" SSL certificate for the domain it found running on the server it just checked.
According to Computest, this validation process is flawed: through a few tricks, it allows server owners to receive SSL certificates issued for other domains, such as Facebook, Google, Dropbox, and others, which can be sold on the black market or used in man-in-the-middle attacks.
The first issue Alkemade discovered in the StartEncrypt client was a design problem: users could manually configure the folder from which the client would download the domain-validation signature.
An attacker would only have to point the tool at a folder on his server holding the signature of another domain. These domain signatures can be extracted from any site that allows users to upload files: GitHub, Dropbox, etc.
The second issue is far more serious, as it allowed an attacker to obtain SSL certificates for even more domains than the first.
According to the researcher, one of the API verification calls contains a parameter called "verifyRes," which takes a URL as input. This means the client was vulnerable to open redirects: an attacker could forge this request and point the tool off-domain, to a server not under his control.
But this flaw is not so easily exploitable. The domain URL to which the attacker points the tool must either (1) allow users to upload files and serve them back in raw format, or (2) contain an open redirect issue of its own.
While the first condition was quite rare, the second was not. All websites that support OAuth 2.0, a specification that powers social login features, must allow open redirects for the protocol to function properly.
A crook leveraging this OAuth 2.0 condition and the StartEncrypt client could fool the StartSSL service into issuing a free SSL certificate in his name for any site that provides OAuth 2.0 support, such as Facebook, Twitter, Yahoo, Microsoft, and so on.
Additionally, CompuTest also discovered that StartEncrypt doesn't check its own server's certificate for validity when connecting to the API, meaning crooks could receive verification requests and issue false SSL certificates for users trying to use StartEncrypt.
The API also doesn't verify the content type of the file it downloads for verification, so attackers can obtain certificates in the name of third-party websites where users can upload their avatars. On top of that, the certificate's private key, which must remain secret, is stored with 0666 permissions in a public folder, so anyone can read it.
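The permissions flaw, at least, has a textbook fix. As a hedged sketch (this is illustrative Python, not StartEncrypt's actual code), a client can create the key file with owner-only access from the start and audit existing keys:

```python
import os
import stat

def write_private_key(path: str, pem_data: bytes) -> None:
    """Create the key file with mode 0600 at creation time, so there is
    never a window in which it sits world-readable (as with 0666)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as handle:
        handle.write(pem_data)

def is_world_readable(path: str) -> bool:
    """Audit helper: flag keys readable by group members or others."""
    return bool(os.stat(path).st_mode & (stat.S_IRGRP | stat.S_IROTH))
```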
Furthermore, just like Let's Encrypt, StartEncrypt is vulnerable to a Duplicate-Signature Key Selection attack.
StartCom has released a new version of the StartEncrypt Linux client, albeit with the same version number, 1.0.0.1. Computest says it reported other issues to the service, which are still being corrected and will be fixed in future updates.
Back in March, StartSSL faced a similar issue with its general service, which also allowed crooks to receive SSL certificates for domains they don't own.

 

 10 

Free Decrypter Available for Download for MIRCOP Ransomware

The MIRCOP ransomware appeared towards the end of June with two unique features that set it apart from the many ransomware variants discovered each day.
One of them is its ransom note, which uses the masked Guy Fawkes figure, usually employed by Anonymous hackers. The ransom note has a threatening tone and tells the user to return stolen money or face payback, supposedly from the robbed Anonymous hacker.
The second feature was the exorbitant amount of money asked in the ransom note, which was 48.48 Bitcoin (~$32,000).
Three days after Trend Micro and security researcher Nyxbone revealed the presence of this new family, Michael Gillespie had already put together a decrypter for this threat.
You can download the decrypter from here. Just unzip the file and run the application. The decrypter will leave the original encrypted files in place, just in case the decryption routine fails, so you can use it without fearing you'll lose your original files.
Once the decryption ends, you'll receive a notification message on your screen.
If you need help with the decrypter, Gillespie provides support on this Bleeping Computer forum thread.

 

 11 

Second Man Pleads Guilty in "The Fappening" Celebrity Hacking Scandal

Majerczyk was the second man accused of participating in The Fappening celebrity hacking scandal, in which hackers breached the personal iCloud and Gmail accounts of 100 celebrities and leaked private images and videos, sometimes adult in nature.
Before naming Majerczyk, the FBI had raided the home of Emilio Herrera, another Chicago native, on October 15, 2014. This past January, it was revealed that the FBI raided Majerczyk's house on the same day.
Authorities said that Majerczyk registered the appleprivacysecurity@gmail.com email address, similar to the official appleprivacysecurity@icloud.com address. He then used this address to send spear-phishing emails to various celebrities.
The FBI said the suspect accessed 330 accounts more than 600 times, downloading sensitive material, from November 23, 2013, through August 2014.
Court documents mentioned the celebrities' initials: J.L., K.U., J.V. and A.L. The initials J.L. could stand for Jennifer Lawrence, K.U. for Kate Upton, and J.V. for Justin Verlander, all victims of The Fappening (or Celebgate).
According to his plea deal, once Majerczyk officially signs the document, he will face a statutory maximum sentence of five years in federal prison.
In March, the US Department of Justice announced that it had charged Ryan Collins, 36, of Lancaster, Pennsylvania, with hacking the Apple and Gmail accounts of several celebrities between November 2012 and September 2014.
The FBI said Collins hacked 50 iCloud accounts and 72 Gmail accounts. Collins later pleaded guilty and agreed to a recommended prison term of 18 months.
Authorities said that neither Majerczyk nor Collins was the one who uploaded the pictures online. That could be the work of Herrera or a fourth suspect.

 

 12 

New Adwind RAT Campaign with Zero AV Detection Targets Businesses in Denmark

The campaign took place over the weekend and, according to Heimdal Security experts, it only targeted Danish companies.
Regardless of its initial scope, all spam emails were written in English, so an expansion to other countries may not take more than the push of a button somewhere in the crook's control panel.
Adwind first appeared on the market under the name Frutas RAT (January 2012) and rebranded several times as Unrecom RAT (February 2014), AlienSpy (October 2014), and most recently JSocket RAT (June 2015). Most security firms still call it Adwind, the name under which it claimed the most victims.
A Kaspersky report released in February 2016, after authorities managed to shut down the crooks' operation, revealed that the group behind this malware sold its toolkit to 1,800 other criminals, who then infected over 443,000 victims.
In this campaign, the crooks delivered their malware to infect computers belonging to Danish companies.
The Adwind RAT would then open a backdoor on these infected systems and allow the crooks to take over devices, search for sensitive information and then exfiltrate it via various channels.
All compromised computers were also added to a global botnet, which the malware's operator could have used to send spam or launch DDoS attacks. Heimdal's team detected more than 11 C&C servers used in this latest campaign.
"Online criminals seem to be turning their attention to more targeted attacks that require a smaller infrastructure to carry out. This means less resources put into building infrastructure and a potentially bigger return on investment because of the targeted nature of the strike," Heimdal's Andra Zaharia explains.
"Avoiding large-scale campaigns also means thay have a higher chance of going undetected. This gives them more time to sit on the infected systems and extract more data from them. "

 

 13 

Video Compares Samsung Galaxy Note 7 Grace UX to Galaxy S7 UI

The Grace UX was compared to the Galaxy S7 UI on a video posted on YouTube by XEETECHCARE. The Grace UX is expected to come to the Galaxy S7, but it’s unknown when Samsung will start rolling out the UX.
The overall design seems to be the same: the lock screen has the same icons, and the look of the home screen doesn’t seem to have changed in the Grace UX. Design changes are visible in the icons, which were made more consistent and come with a discreet shade.
In addition, Samsung has unified Action Memo and S Note into one application simply called Notes. Moreover, a blue light filter was added to the status bar toggles; it’s a night mode feature that allows users to adjust the opacity of the screen. Samsung also added voice search to the quick settings area, so that users can find apps and information on their phones faster. Other minor changes are visible in the Grace UX as well, including an update to the Settings UI.

 

 14 

Huawei Releases Photo Captured With a DSLR, Implies It Was Taken With Its P9

Huawei took to its Google+ account to promote the P9 and released an image while implying that it was taken with the P9’s rear cameras. The post didn’t actually say that it was taken with the P9, but it certainly implied so.
Google+ allows users to see the EXIF data of any image uploaded on the platform, just like other photo-sharing platforms such as Flickr. The EXIF data showed that the image was actually taken with a Canon EOS 5D Mark III camera, which sells for about $4,500. The details reveal that the lens was an EF 70-200mm f/2.8L IS II USM, the focal length 135mm, the exposure 1/800, and the exposure bias -1 EV.
The level of detail in the image is quite stunning, and while some changes could have been made during editing, it’s still too good a photo to have been taken with a smartphone camera. The post on Google+ read: “We managed to catch a beautiful sunrise with Deliciously Ella. The #HuaweiP9’s dual Leica cameras makes taking photos in low light conditions like this a pleasure. Reinvent smartphone photography and share your sunrise pictures with us. #OO”.
Huawei seems to have taken the post down and should release a statement soon enough, considering that the post was truly misleading.

 

 15 

Oracle will pay HP $3 billion over Itanium server software

Oracle has been ordered to pay HP $3 billion after losing a lawsuit regarding software development for HP’s Itanium servers.
During the lawsuit, HP claimed that Oracle had violated a contract by failing to continue developing support software for its Itanium chips. The trial ran for one month in a California state court in San Jose, ending with HP being granted the full amount it had claimed at the beginning of the case.
In 2001, HP developed the Itanium chip in partnership with Intel. Oracle and HP then signed a contract that would see Oracle writing software for HP’s servers. However, Oracle decided to back out of the deal when the high-end servers failed to live up to initially high expectations.
HP believed that Oracle’s decision to pull out of the agreement affected its business and led its customers to distance themselves from the Itanium platform. In 2012, a Santa Clara Superior Court ordered the software developer to resume working with HP to support its Itanium line of chips. HP argued that it had already been affected by Oracle’s actions and this led to the case between the two companies.
The general counsel of HP Enterprise (HPE), John Schultz, defended the jury’s verdict, saying: "HP is gratified by the jury’s verdict, which affirms what HP has always known and the evidence overwhelmingly showed". He also made the point that Oracle’s decision to abandon making software for Itanium "was a clear breach of contract".
Oracle responded to the outcome of the case, saying: "Oracle never believed it had a contract to continue to port our software to Itanium indefinitely and we do not believe so today... Oracle has been providing all its latest software for the Itanium systems since the original ruling while HP and Intel stopped developing systems years ago".
The company’s general counsel, Dorian Daley, still believes that Itanium was on its way out, saying: "Two trials have now demonstrated clearly that the Itanium chip was nearing end of life, HP knew it, and was actively hiding that fact from its customers".

 

 16 

KDE Plasma Wayland Image Now Built on KDE Neon Infrastructure, Qt 5.7 Is Coming

What's new in the updated KDE Plasma Wayland images? Well, first of all, these appear to be the first ISOs built on the KDE Neon infrastructure, which means they come with the latest KDE Plasma 5 desktop environment, KDE Applications, and KDE Frameworks packages from the Git master branch.
Second of all, it is now possible to backspace in the Konsole terminal emulator. In the meantime, it appears that Jonathan Riddell and his team of developers are working hard to bring in even newer technologies, such as the Qt 5.7 GUI toolkit.
"It’s time to check in on the Plasma Wayland image for an update. Built on Neon infrastructure, this comes with the latest from KDE Git master for crack of the day fun," said Jonathan Riddell in today's announcement. "Actually, it’s from the end of last week because we paused updates while we add Qt 5.7, but it’s close enough. "
Some bugs are still present in this new ISO refresh of the KDE Plasma Wayland image, such as the annoying blue window header, which appears and disappears randomly. Until a new ISO image is generated, you can download today's KDE Plasma Wayland image and take it for a test drive.
For those not in the know, these KDE Plasma Wayland images feature a full-fledged KDE Plasma 5 desktop environment running on the next-generation Wayland display server, which will soon become the norm for numerous GNU/Linux operating systems, replacing the old X.Org (X11) server. More details about the KDE Plasma Wayland port should be available on the official website.

 

 17 

OnePlus Releases OxygenOS 3.2.0 for OnePlus 3 with Improved RAM Management

The update enables sRGB mode in the developer options and improves RAM and GPS management. Moreover, OnePlus has enhanced audio playback quality and updated the custom icon packs on the OnePlus 3.
In addition, some issues that users had with notifications were fixed, and camera quality and functionality were improved. OnePlus announced that it fixed issues in the Gallery and added the latest Google security patches, so that personal information on the phone stays safe. Some bugs in the clock and music apps were also fixed.
The size of the update is quite large at 396MB, but it contains the improvements that many users have patiently awaited. One of the biggest issues with the OnePlus 3 concerned RAM management, especially since the phone's 6GB of RAM was its main selling point.
The OxygenOS update has already started rolling out and should arrive on devices within 48 hours.

 

 18 

Telstra boosts software capabilities with Readify buy

Software development company Readify has been snapped up by Telstra.
The Microsoft partner has some 200 staff, including 160 developers, Telstra said.
The telco said Readify would complement the capabilities it acquired when it bought Kloud. Telstra announced in January that Kloud, which offers cloud-migration services to enterprises, would join its Network Applications and Services business.
“As we know, apps and software in general are playing an increasingly important role in businesses,” Telstra executive director, global enterprise and services, Michelle Bendschneider, said in a statement.
“Readify is recognised globally for its innovative software solutions and will further help us create software-led digital transformations with our customers.”
“Readify has a proven record of creating innovative solutions with our customers and, with Telstra’s scale, the opportunities are tremendously exciting,” said Readify managing director Graeme Strange.
In its announcement Telstra pointed to a number of other acquisitions it had made that have augmented its technology capabilities, including the acquisition of unified communications and contact centre integrator NSC in 2013, O2 Networks in January 2014, and Queensland-based systems integrator Bridgepoint in October 2014.
CEO Andrew Penn has previously described Telstra’s vision of becoming a “world class technology company that empowers people to connect”.
“[W]hen I say that our vision is to become a world class technology company, that does not mean to say that we expect to become Microsoft or Google,” the CEO told an investor briefing earlier this year.
“The point is, Telstra, through our capabilities and partnerships, is in a position to provide a window into the best of technology available today and deliver it across our networks to our customers. And there is virtually no technology innovation today that is happening that does not rely on the underlying network.”

 

 19 

'On-the-fly' 3-D print system prints what you design, as you design it -- ScienceDaily

But what if you decide to make changes? You may have to go back, change the design and print the whole thing again, perhaps more than once. So Cornell researchers have come up with an interactive prototyping system that prints what you are designing as you design it; the designer can pause anywhere in the process to test, measure and, if necessary, make changes that will be added to the physical model still in the printer.
"We are going from human-computer interaction to human-machine interaction," said graduate student Huaishu Peng, who described the On-the-Fly-Print system in a paper presented at the 2016 ACM Conference for Human Computer Interaction. Co-authors are François Guimbretière, associate professor of information science; Steve Marschner, professor of computer science; and doctoral student Rundong Wu.
Their system uses an improved version of an innovative "WirePrint" printer developed in a collaboration between Guimbretière's lab and the Hasso Plattner Institute in Potsdam, Germany.
In conventional 3-D printing, a nozzle scans across a stage depositing drops of plastic, rising slightly after each pass to build an object in a series of layers. With the WirePrint technique the nozzle extrudes a rope of quick-hardening plastic to create a wire frame that represents the surface of the solid object described in a computer-aided design (CAD) file. WirePrint aimed to speed prototyping by creating a model of the shape of an object instead of printing the entire solid. The On-the-Fly-Print system builds on that idea by allowing the designer to make refinements while printing is in progress.
The new version of the printer has "five degrees of freedom." The nozzle can only work vertically, but the printer's stage can be rotated to present any face of the model facing up, so an airplane fuselage, for example, can be turned on its side to add a wing. There is also a cutter to remove parts of the model, say, to give the airplane a cockpit.
The nozzle has been extended so it can reach through the wire mesh to make changes inside. A removable base aligned by magnets allows the operator to take the model out of the printer to measure or test to see if it fits where it's supposed to go, then replace it in the precise original location to resume printing.
The software -- a plug-in to a popular CAD program -- designs the wire frame and sends instructions to the printer, allowing for interruptions. The designer can concentrate on the digital model and let the software control the printer. The designer can keep working on the CAD file while printing continues, and once that work is done, printing resumes with the changes incorporated into the print.
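As a loose sketch (not the authors' implementation), the interplay can be pictured as a shared queue of wireframe segments: the CAD plug-in commits or retracts geometry while the printer drains whatever has been committed so far:

```python
from collections import deque

class OnTheFlyPrintQueue:
    """Toy model of interactive printing: design edits and printing
    proceed concurrently through a shared segment queue."""

    def __init__(self):
        self.pending = deque()

    def commit_segment(self, segment):
        # Called by the CAD plug-in whenever new geometry is ready.
        self.pending.append(segment)

    def retract_segment(self, segment):
        # A design cut: drop geometry that hasn't been printed yet.
        try:
            self.pending.remove(segment)
            return True
        except ValueError:
            return False  # already printed; the physical cutter must act

    def next_move(self):
        # Called by the printer loop; None means "wait for the designer".
        return self.pending.popleft() if self.pending else None
```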
As a demonstration, the researchers created a model for a toy airplane to fit into a Lego airport set. This required adding wings, cutting out a cockpit for a Lego pilot and frequently removing the model to see if the wingspan was right for the runway. The entire project was completed in just 10 minutes.
By creating a "low-fidelity sketch" of what the finished product will look like and allowing the designer to redraw it as it develops, the researchers said, "We believe that this approach has the potential to improve the overall quality of the design process. "
A video can be found here: https://www.youtube.com/watch?v=X68cfl3igKE

 

 20 

Living in the '90s? So are underwater wireless networks: Engineers are speeding them up to improve tsunami detection, walkie-talkies for scuba divers, and search-and-rescue work -- ScienceDaily

The flashback is due to the speed of today's underwater communication networks, which is comparable to the sluggish dial-up modems from America Online's heyday. The shortcoming hampers search-and-rescue operations, tsunami detection and other work.
But that is changing due in part to University at Buffalo engineers who are developing hardware and software tools to help underwater telecommunication catch up to its over-the-air counterpart.
Their work, including ongoing collaborations with Northeastern University, is described in a study -- "Software-Defined Underwater Acoustic Networks: Toward a High-Rate Real-Time Reconfigurable Modem" -- published in November in IEEE Communications Magazine.
"The remarkable innovation and growth we've witnessed in land-based wireless communications has not yet occurred in underwater sensing networks, but we're starting to change that," says Dimitris Pados, PhD, Clifford C. Furnas Professor of Electrical Engineering in the School of Engineering and Applied Sciences at UB, a co-author of the study.
The amount of data that can be reliably transmitted underwater is much lower compared to land-based wireless networks. This is because land-based networks rely on radio waves, which work well in the air, but not so much underwater.
As a result, sound waves (such as the noises dolphins and whales make) are the best alternative for underwater communication. The trouble is that sound waves encounter obstacles such as path loss, delay and Doppler effects, which limit their ability to transmit data. Underwater communication is also hindered by the architecture of these systems, which lack standardization, are often proprietary, and are not energy-efficient. Pados and a team of researchers at UB are developing hardware and software -- everything from modems that work underwater to open-architecture protocols -- to address these issues. Of particular interest is merging a relatively new communication platform, software-defined radio, with underwater acoustic modems.
Traditional radios, such as an AM/FM transmitter, operate in a limited bandwidth (in this case, AM and FM). The only way to pick up additional signals is to take the radio apart and rewire it. Software-defined radio makes this step unnecessary: the radio can shift between different frequencies of the electromagnetic spectrum under software control. It is, in other words, a "smart" radio.
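As a toy illustration of that flexibility (the sample rate, bit rate and tone frequencies below are invented, not the UB team's parameters), a software-defined acoustic transmitter can be retuned simply by changing two numbers in code:

```python
import numpy as np

FS = 48_000              # sample rate, Hz
BIT_RATE = 100           # acoustic links are slow; 100 bit/s is illustrative
F0, F1 = 9_000, 11_000   # carrier tones for bits 0 and 1 (hypothetical)

def bfsk_modulate(bits, fs=FS, bit_rate=BIT_RATE, f0=F0, f1=F1):
    """Binary frequency-shift keying: one tone per bit value. Retuning
    the "radio" means passing different f0/f1; no rewiring needed."""
    n = fs // bit_rate  # samples per bit
    t = np.arange(n) / fs
    tone = {0: np.sin(2 * np.pi * f0 * t),
            1: np.sin(2 * np.pi * f1 * t)}
    return np.concatenate([tone[b] for b in bits])

waveform = bfsk_modulate([1, 0, 1, 1, 0])  # ready for a DAC and transducer
```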
Applying software-defined radio to acoustic modems could vastly improve underwater data transmission rates. For example, in experiments last fall in Lake Erie, just south of Buffalo, New York, graduate students from UB proved that software-defined acoustic modems could boost data transmission rates by 10 times what today's commercial underwater modems are capable of.
The potential applications for such technology include improved tsunami detection, walkie-talkies for scuba divers, and search-and-rescue work.

 

 21 

Mapping software tracks threats to endangered species: Software helps conservationists predict species movement -- ScienceDaily

The Duke team used the software and images to assess recent forest loss restricting the movement of Peru's critically endangered San Martin titi monkey (Callicebus oenanthe) and identify the 10 percent of remaining forest in the species' range that presents the best opportunity for conservation.
"Using these tools, we were able to work with a local conservation organization to rapidly pinpoint areas where reforestation and conservation have the best chance of success," said Danica Schaffer-Smith, a doctoral student at Duke's Nicholas School of the Environment, who led the study. "Comprehensive on-the-ground assessments would have taken much more time and been cost-prohibitive given the inaccessibility of much of the terrain and the fragmented distribution and rare nature of this species. "
The San Martin titi monkey inhabits an area about the size of Connecticut in the lowland forests of north central Peru. It was recently added to the International Union for Conservation of Nature's list of the 25 most endangered primates in the world.
Increased farming, logging, mining and urbanization have fragmented forests across much of the monkey's once-remote native range and contributed to an estimated 80 percent decrease in its population over the last 25 years.
Titi monkeys travel an average of 663 meters a day, primarily moving from branch to branch to search for food, socialize or escape predators. Without well-connected tree canopies, they're less able to survive local threats and disturbances, or to recolonize suitable new habitats. The diminutive species, which typically weighs just two to three pounds at maturity, mates for life and produces at most one offspring a year. Mated pairs are sometimes seen intertwining their long tails when sitting next to each other.
Armed with Aster and Landsat satellite images showing the pace and extent of recent forest loss, and GeoHAT, a downloadable geospatial habitat assessment toolkit developed at Duke, Schaffer-Smith worked with Antonio Bóveda-Penalba, program coordinator at the Peruvian NGO Proyecto Mono Tocón, to prioritize where conservation efforts should be focused.
"The images and software, combined with Proyecto Mono Tocón's detailed knowledge of the titi monkey's behaviors and habitats, allowed us to assess which patches and corridors of the remaining forest were the most critical to protect," said Jennifer Swenson, associate professor of the practice of geospatial analysis at Duke, who was part of the research team.
The team's analysis revealed that at least 34 percent of lowland forests in the monkey's northern range, Peru's Alto Mayo Valley, have been lost. It also showed that nearly 95 percent of remaining habitat fragments are likely too small and poorly connected to support viable populations, and that less than 8 percent of all remaining suitable habitat lies within existing conservation areas.
Areas the model showed had the highest connectivity comprise just 10 percent of the remaining forest in the northern range, along with small patches elsewhere. These forests present the best opportunities for giving the highly mobile titi monkey the protected paths for movement it needs to survive.
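GeoHAT operates on real raster data inside ArcGIS, but the graph idea behind connectivity ranking can be sketched in a few lines of Python with the networkx library. The patch coordinates below are invented; only the 663-meter daily range comes from the article:

```python
import networkx as nx

# Invented forest-patch centroids (id, x, y) in meters; real input is GIS data.
patches = [("A", 0, 0), ("B", 500, 100), ("C", 1100, 50), ("D", 1600, 400)]
DAILY_RANGE_M = 663  # average titi monkey daily travel, per the article

G = nx.Graph()
G.add_nodes_from(name for name, _, _ in patches)
for i, (a, ax, ay) in enumerate(patches):
    for b, bx, by in patches[i + 1:]:
        dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        if dist <= DAILY_RANGE_M:
            G.add_edge(a, b, weight=dist)  # reachable in a day's travel

# Patches that many movement paths must pass through score highest;
# they are natural candidates for protected corridors.
rank = nx.betweenness_centrality(G, weight="weight")
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```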
Based on this analysis, the team identified a 10-kilometer corridor between Peru's Morro de Calzada and Almendra conservation areas as a high priority for protection.
"For many rare species threatened by active habitat loss, the clock is literally ticking," Schaffer-Smith said. "Software tools like GeoHAT -- or similar software such as CircuitScape -- can spell the difference between acting in time to save them or waiting till it's too late. "
Schaffer-Smith, Swenson and Bóveda-Penalba published their peer-reviewed research March 16 in the journal Environmental Conservation.
GeoHAT is a suite of ArcGIS geoprocessing tools designed to evaluate overall habitat quality and connectivity under changing land-use scenarios. It was developed by John Fay, an instructor in the Geospatial Analysis Program at Duke's Nicholas School, and can be used to assess habitats for a wide range of land-based species. (Learn more: http://sites.duke.edu/johnfay/projects/geohat/)

 

 22 

Tracking brain atrophy in MS could become routine, thanks to new software -- ScienceDaily

That may be changing. Starting next month, University at Buffalo researchers will be testing in the U.S., Europe, Australia and Latin America a new software tool they developed that could make assessing brain atrophy part of the clinical routine for MS patients. The research is funded by Novartis as part of its commitment to advancing the care of people with MS through effective treatments and tools for assessing disease activity.
According to the UB researchers, being able to routinely measure how much brain atrophy has occurred would help physicians better predict how a patient's disease will progress. It could also provide physicians with more information about how well MS treatments are working in individual patients. These and other benefits were outlined in a recent review study the researchers published in Expert Review of Neurotherapeutics.
"Measuring brain atrophy on an annual basis will allow clinicians to identify which of their patients is at highest risk for physical and cognitive decline," said Robert Zivadinov, MD, PhD, professor of neurology and director of the Buffalo Neuroimaging Analysis Center in the Jacobs School of Medicine and Biomedical Sciences at UB. Over the past 10 years, he and his colleagues at UB, among the world's most prolific groups studying brain atrophy and MS, developed the world's largest database of magnetic resonance images of individuals with MS, consisting of 20,000 brain scans with data from about 4,000 MS patients. The new tool, Neurological Software Tool for Reliable Atrophy Measurement in MS, or NeuroSTREAM, simplifies the calculation of brain atrophy based on data from routine magnetic resonance images and compares it with other scans of MS patients in the database.
More than lesions
Without measuring brain atrophy, clinicians cannot obtain a complete picture of how a patient's disease is progressing, Zivadinov said.
"MS patients experience, on average, about three to four times more annual brain volume loss than a healthy person," he said. "But a clinician can't tell a patient, 'You have lost this amount of brain volume since your last visit.'"
Instead, clinicians rely primarily on the presence of brain lesions to determine how MS is progressing. "Physicians and radiologists can easily count the number of new lesions on an MRI scan," said Zivadinov, "but lesions are only part of the story related to development of disability in MS patients. "
And even though MS drugs can stop lesions from forming, in many cases brain atrophy and the cognitive and physical decline it causes will continue, the researchers say.
"While the MS field has to continue working on solving challenges related to brain atrophy measurement on individual patient level, its assessment has to be incorporated into treatment monitoring, because in addition to assessment of lesions, it provides an important additional value in determining or explaining the effect of disease-modifying drugs," Zivadinov and co-authors wrote in a June 23 editorial that was part of a series of commentaries in Multiple Sclerosis Journal addressing the pros and cons of using brain atrophy to guide therapy monitoring in MS.
Soon, the UB researchers will begin gathering data to create a database of brain volume changes in more than 1,000 patients from 30 MS centers in the U.S. and around the world. The objective is to determine if NeuroSTREAM can accurately quantify brain volume changes in MS patients.
The software runs on a user-friendly, cloud-based platform that provides compliance with privacy health regulations such as HIPAA, and it is easily accessible from workstations, laptops, tablets and smartphones. The ultimate goal is to develop a user-friendly website to which clinicians can upload anonymized patient scans and receive real-time feedback on what the scans reveal.
NeuroSTREAM measures brain atrophy by tracking the lateral ventricular volume (LVV), the volume of one of the brain structures that contain cerebrospinal fluid. When atrophy occurs, the LVV expands.
Canary in the coal mine
"The ventricles are a surrogate measure of brain atrophy," said Michael G. Dwyer III, PhD, assistant professor in the Department of Neurology and the Department of Bioinformatics in the Jacobs School of Medicine and Biomedical Sciences at UB. "They're the canary in the coal mine. "
Dwyer, a computer scientist and director of technical imaging at the Buffalo Neuroimaging Analysis Center, is principal investigator on the NeuroSTREAM software development project. At the American Academy of Neurology meeting in April, he reported preliminary results showing that NeuroSTREAM provided a feasible, accurate, reliable and clinically relevant method of measuring brain atrophy in MS patients, using LVV.
"Usually, you need high-resolution research-quality brain scans to do this," Dwyer explained, "but our software is designed to work with low resolution scans, the type produced by the MRI machines normally found in clinical practice. "
To successfully measure brain atrophy in a way that's meaningful for treatment, Zivadinov explained, what's needed is a normative database through which individual patients can be compared to the population of MS patients. "NeuroSTREAM provides context, because it compares a patient's brain not just to the general population but to other MS patients," said Dwyer.
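That comparison step can be pictured with a minimal sketch. The Python below is purely illustrative -- NeuroSTREAM works from actual MRI scans, and these numbers are hypothetical -- but it shows how one patient's annual ventricular expansion could be placed within a reference distribution drawn from other MS patients:

    # Place a patient's annual LVV change within a normative MS distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical reference database of annual % changes in lateral
    # ventricular volume (LVV) for MS patients.
    reference_lvv_change = rng.normal(loc=3.5, scale=2.0, size=4000)

    patient_lvv_change = 6.8  # hypothetical % increase since the last scan

    percentile = (reference_lvv_change < patient_lvv_change).mean() * 100
    print(f"Expansion exceeds that of {percentile:.0f}% of reference patients")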

 

 23 

Diagnosing ear infection using smartphone -- ScienceDaily

"Because of lack of health personnel in many developing countries, ear infections are often misdiagnosed or not diagnosed at all. This may lead to hearing impairments, and even to life-threatening complications," says Claude Laurent, researcher at the Department of Clinical Sciences at Umeå University and co-author of the article. "Using this method, health personnel can diagnose middle ear infections with the same accuracy as general practitioners and paediatricians. Since the system is cloud-based, meaning that the images can be uploaded and automatically analysed, it provides rapid access to accurate and low-cost diagnoses in developing countries. "
The researchers at Umeå University have collaborated with the University of Pretoria in South Africa in their effort to develop an image-processing technique to classify otitis media. The technique was recently described in the journal EBioMedicine -- a new Lancet publication.
The software system consists of a cloud-based analysis of images of the eardrum taken using an otoscope, which is an instrument normally used in the medical examination of ears. Images of eardrums, taken with a digital otoscope connected to a smartphone, were compared to high-resolution images in an archive and automatically categorised according to predefined visual features associated with five diagnostic groups.
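The article does not specify the classifier, so the sketch below shows just one simple realization of matching predefined visual features against a labelled archive -- a k-nearest-neighbour classifier over hypothetical feature vectors, one per archived eardrum image, each labelled with one of the five diagnostic groups:

    # Toy feature-based classification of eardrum images (illustrative only).
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    archive_features = rng.random((500, 8))        # 500 archived images, 8 visual features
    archive_labels = rng.integers(0, 5, size=500)  # five diagnostic groups

    classifier = KNeighborsClassifier(n_neighbors=5)
    classifier.fit(archive_features, archive_labels)

    new_image_features = rng.random((1, 8))        # features from a new otoscope image
    print("Predicted diagnostic group:", classifier.predict(new_image_features)[0])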
Tests showed that the automatically generated diagnoses based on images taken with a commercial video-otoscope had an accuracy of 80.6 per cent, while an accuracy of 78.7 per cent was achieved for images captured on-site with a low-cost, custom-made video-otoscope. This high accuracy can be compared with the 64-80 per cent accuracy of general practitioners and paediatricians using traditional otoscopes for diagnosis.
"This method has great potential to ensure accurate diagnoses of ear infections in countries where such opportunities are not available at present. Since the method is both easy and cheap to use, it enables rapid and reliable diagnoses of a very common childhood illness," says Claude Laurent.

 

 24 

Investigating world’s oldest human footprints with software designed to decode crime scenes -- ScienceDaily

The software was developed as part of a Natural Environments Research Council (NERC) Innovation Project awarded to Professor Matthew Bennett and Dr Marcin Budka in 2015 for forensic footprint analysis. They have been developing techniques to enable modern footwear evidence to be captured in three-dimensions and analysed digitally to improve crime scene practice.
Footprints reveal much about the individuals who made them: their body mass, height and walking speed. "Footprints contain information about the way our ancestors moved," explains Professor Bennett. "The tracks at Laetoli are the oldest in the world and show a line of footprints from our early ancestors, preserved in volcanic ash. They provide a fascinating insight into how early humans walked. The techniques we have been developing for use at modern crime scenes can also reveal something new about these ancient track sites."
The Laetoli tracks were discovered by Mary Leakey in 1976 and are thought to be around 3.6 million years old. There are two parallel trackways on the site, where two ancient hominins walked across the surface. One of these trackways was obscured when a third person followed the same path. The merged trackway has largely been ignored by scientists over the last 40 years, and the fierce debate about the walking style of the track-makers has predominantly focused on the undisturbed trackway.
By using the software developed through the NERC Innovation Project, Professor Bennett and his colleagues have been able to decouple the tracks of this merged trail and reveal for the first time the shape of the tracks left by this mysterious third track-maker. There is also an intriguing hint of a fourth track-maker at the site.
"We're really pleased that we can use our techniques to capture new data from these extremely old footprints," says Dr Marcin Budka who developed the software used in the study.
"It means that we have effectively doubled the information that the palaeo-anthropological community has available for study of these hominin track-makers," continues Dr Reynolds one of the co-authors of the study.
"As well as making new discoveries about our early ancestors, we can apply this science to help modern society combat crime. By digitising tracks at a crime scene we can preserve, share and study this evidence more easily," says Sarita Morse who helped conceive the original analysis.
For more information, please see the following video: https://www.youtube.com/watch?v=Rl8odSqoDZc

 

 25 

New technique wipes out unwanted data -- ScienceDaily

To do this, software programs in these systems calculate predictive relationships from massive amounts of data. The systems identify these predictive relationships using advanced algorithms -- sets of rules for solving math problems -- and "training data." This data is then used to construct the models and features that enable a system to determine the latest best-seller you wish to read or to predict the likelihood of rain next week.
This intricate process means that a piece of raw data often goes through a series of computations in a system. The computations and information derived by the system from that data together form a complex propagation network called the data's "lineage. " The term was coined by Yinzhi Cao, an assistant professor of computer science and engineering, and his colleague, Junfeng Yang of Columbia University, who are pioneering a novel approach to make learning systems forget.
Considering how important this concept is to increasing security and protecting privacy, Cao and Yang believe that easy adoption of forgetting systems will be increasingly in demand. The two researchers have developed a way to do it faster and more effectively than can be done using current methods.
Their concept, called "machine unlearning," is so promising that Cao and Yang have been awarded a four-year, $1.2 million National Science Foundation grant to develop the approach.
"Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity," said Cao, a principal investigator on the project. "These systems must remove the data and undo its effects so that all future operations run as if the data never existed. "
Increasing security and privacy protection
There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one.
After Facebook changed its privacy policy, many users deleted their accounts and the associated data. The iCloud photo hacking incident in 2014 -- in which hundreds of celebrities' private photos were accessed via Apple's cloud services suite -- led to online articles teaching users how to completely delete iOS photos including the backups. New research has revealed that machine learning models for personalized medicine dosing leak patients' genetic markers. Only a small set of statistics on genetics and diseases are enough for hackers to identify specific individuals, despite cloaking mechanisms.
Naturally, users unhappy with these newfound risks want their data, and its influence on the models and statistics, to be completely forgotten.
Security is another reason. Consider anomaly-based intrusion detection systems used to detect malicious software. In order to positively identify an attack, the system must be taught to recognize normal system activity. Therefore the security of these systems hinges on the model of normal behaviors extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget the data and its lineage in order to regain security.
Widely used learning systems such as Google Search are, for the most part, only able to forget a user's raw data -- and not the data's lineage -- upon request. This is problematic for users who wish to ensure that any trace of unwanted data is removed completely, and it is also a challenge for service providers who have strong incentives to fulfill data removal requests and retain customer trust.
Service providers will increasingly need to be able to remove data and its lineage completely to comply with laws governing user data privacy, such as the "right to be forgotten" ruling issued in 2014 by the European Union's top court. In October 2014, Google removed more than 170,000 links to comply with the ruling, which affirmed users' right to control what appears when their names are searched. In July 2015, Google said it had received more than a quarter-million such requests.
Breaking down dependencies
Building on work that was presented at the 2015 IEEE Symposium on Security and Privacy and then published, Cao and Yang's "machine unlearning" method is based on the fact that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch.
Their approach introduces a layer of a small number of summations between the learning algorithm and the training data, breaking their direct dependency on each other: the learning algorithms depend only on the summations, not on individual data points. With this method, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Simply recomputing a small number of summations removes the data and its lineage completely -- and much faster than retraining the system from scratch.
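A toy example makes the summation idea concrete. In the Python sketch below (our illustration, not Cao and Yang's code), a naive Bayes text classifier touches its training data only through per-class counts, so forgetting an example simply subtracts its contribution from those summations -- no retraining needed:

    # Naive Bayes with explicit summations, supporting exact unlearning.
    from collections import Counter

    class UnlearnableNB:
        def __init__(self):
            self.word_counts = {0: Counter(), 1: Counter()}  # the summations
            self.class_counts = Counter()

        def learn(self, words, label):
            self.word_counts[label].update(words)
            self.class_counts[label] += 1

        def unlearn(self, words, label):
            self.word_counts[label].subtract(words)  # remove from the summations
            self.class_counts[label] -= 1

        def score(self, words, label):
            total = sum(self.word_counts[label].values()) + 1
            prior = self.class_counts[label] / max(sum(self.class_counts.values()), 1)
            for w in words:
                prior *= (self.word_counts[label][w] + 1) / total
            return prior

    nb = UnlearnableNB()
    nb.learn(["free", "money"], 1)
    nb.learn(["meeting", "notes"], 0)
    nb.unlearn(["free", "money"], 1)  # model now behaves as if it never saw this
    print(nb.score(["free", "money"], 1))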
Cao believes he and Yang are the first to establish the connection between unlearning and the summation form.
And, it works. Cao and Yang tested their unlearning approach on four diverse, real-world systems: LensKit, an open-source recommendation system; Zozzle, a closed-source JavaScript malware detector; an open-source OSN spam filter; and PJScan, an open-source PDF malware detector.
The success of these initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.
In their paper's introduction, Cao and Yang say that "machine unlearning" could play a key role in enhancing security and privacy and in our economic future:
"We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks.
"We envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership. "

 

 26 

FloSIS: A super-fast network flow capture system for efficient flow retrieval -- ScienceDaily

Network packet capture performs essential functions in modern network management such as attack analysis, network troubleshooting, and performance debugging. As the network edge bandwidth currently exceeds 10 Gbps, the demand for scalable packet capture and retrieval is rapidly increasing. However, existing software-based packet capture systems neither provide high performance nor support flow-level indexing for fast query response. This would either prevent important packets from being stored or make it too slow to retrieve relevant flows.
A research team led by Professor KyoungSoo Park and Professor Yung Yi of the School of Electrical Engineering at Korea Advanced Institute of Science and Technology (KAIST) has recently presented FloSIS, a highly scalable software-based network traffic capture system that supports efficient flow-level indexing for fast query response.
FloSIS is characterized by three key advantages. First, it achieves high-performance packet capture and disk writing by exercising full parallelism in computing resources such as network cards, CPU cores, memory, and hard disks. It adopts the PacketShader I/O Engine (PSIO) for scalable packet capture and performs parallel disk writes for high-throughput flow dumping. To achieve high zero-drop performance, it minimizes the fluctuation of packet-processing latency.
Second, FloSIS generates two-stage flow-level indexes in real time to reduce the query response time. The indexing utilizes Bloom filters and sorted arrays to quickly reduce the search space of a query (a toy sketch of such a two-stage lookup appears below). Also, it is designed to consume only a small amount of memory while allowing flexible queries with wildcards, ranges of connection tuples, and flow arrival times.
Third, FloSIS supports flow-level content deduplication in real time for storage savings. Even with deduplication, the system still records the packet-level arrival time and headers to provide the exact timing and size information. For an HTTP connection, FloSIS parses the HTTP response header and body to maximize the hit rate of deduplication for HTTP objects.
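A toy version of the two-stage lookup promised above might look as follows in Python (our sketch, not the FloSIS source): a Bloom filter cheaply answers "definitely not stored here," and a sorted array of flow keys is binary-searched only when the filter reports a possible hit.

    # Two-stage flow lookup: Bloom filter first, then binary search.
    import bisect
    import hashlib

    class BloomFilter:
        def __init__(self, bits=1 << 16, hashes=3):
            self.bits, self.hashes = bits, hashes
            self.array = bytearray(bits // 8)

        def _positions(self, key):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.bits

        def add(self, key):
            for pos in self._positions(key):
                self.array[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, key):
            return all(self.array[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(key))

    flows = sorted(["10.0.0.1:80-10.0.0.2:5555", "10.0.0.3:443-10.0.0.4:1234"])
    bloom = BloomFilter()
    for flow in flows:
        bloom.add(flow)

    query = "10.0.0.1:80-10.0.0.2:5555"
    if bloom.might_contain(query):            # stage 1: cheap membership test
        i = bisect.bisect_left(flows, query)  # stage 2: binary search
        print("found" if i < len(flows) and flows[i] == query else "false positive")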
These design choices bring enormous performance benefits. On a server machine with dual octa-core CPUs, four 10 Gbps network interfaces, and 24 SATA disks, FloSIS achieves up to 30 Gbps for packet capture and disk writing without a single packet drop. Its indexes take up only 0.25% of the stored content while avoiding slow linear disk search and redundant disk access. On a machine with 24 hard disks of 3 TB each, this translates into 180 GB of indexes for 72 TB of total disk space, which could be managed entirely in memory or stored on solid-state disks for fast random access. Finally, FloSIS deduplicates 34.5% of the storage space of a 67 GB real traffic trace with only 256 MB of extra memory for its deduplication table. In terms of performance, it achieves about 15 Gbps zero-drop throughput with real-time flow deduplication.
This work was presented at the 2015 USENIX Annual Technical Conference (ATC) on July 10, 2015, in Santa Clara, California.

 

 27 

Tool chain for real-time programming -- ScienceDaily

More and more safety-critical embedded electronic solutions are based on rapid, energy-efficient multi-core processors. "Two of the most important requirements of future applications are an increased performance in real time and further reduction of costs without adversely affecting functional safety," says Professor Jürgen Becker of the Institute for Information Processing Technology (ITIV) at KIT, who coordinates ARGO. "For this, multi-core processors have to make available the required performance spectrum at minimum energy consumption in an automated and efficiently programmed manner."
Multi-core systems accommodate several processor cores on one chip. The cores work in parallel and, hence, reach a higher speed and performance. Programming such heterogeneous multi-core processors, however, is very complex. Moreover, the programs have to be tailored precisely to the target hardware and fulfill additional real-time requirements. The ARGO EU research project, named after the swift vessel of Greek mythology, is aimed at significantly facilitating programming by automatic parallelization of model-based applications and code generation. Until now, programmers have had to adapt their code, i.e. the instructions for the computer, to the hardware architecture, which takes considerable effort and prevents the code from being transferred to other architectures.
"Under ARGO, a new standardizable tool chain for programmers is being developed. Even without precise knowledge of the complex parallel processor hardware, the programmers can control the process of automatic parallelization in accordance with the requirements. This results in a significant improvement of performance and a reduction of costs," Becker says.
In the future, the ARGO tool chain can be used to manage the complexity of parallelization and adaptation to the target hardware in a largely automated manner and with little effort. Under the project, real-time-critical applications in the areas of real-time flight dynamics simulation and real-time image processing are studied and evaluated by way of example.

 

 28 

Data scientists launch free tools to analyze online trends, memes: Web-based software provides journalists, researchers and public direct access to sophisticated meme-tracking algorithms -- ScienceDaily

The power to explore online social media movements -- from the pop cultural to the political -- with the same algorithmic sophistication as top experts in the field is now available to journalists, researchers and members of the public from a free, user-friendly online software suite released by scientists at Indiana University.
The Web-based tools, called the Observatory on Social Media, or "OSoMe" (pronounced "awesome"), provide anyone with an Internet connection the power to analyze online trends, memes and other bursts of viral activity.
An academic pre-print paper on the tools is available in the open-access journal PeerJ.
"This software and data mark a major goal in our work on Internet memes and trends over the past six years," said Filippo Menczer, director of the Center for Complex Networks and Systems Research and a professor in the IU School of Informatics and Computing. The project is supported by nearly $1 million from the National Science Foundation.
"We are beginning to learn how information spreads in social networks, what causes a meme to go viral and what factors affect the long-term survival of misinformation online," Menczer added. "The observatory provides an easy way to access these insights from a large, multi-year dataset. "
The new tools generate interactive graphs showing how hashtags and memes spread across Twitter.
By plugging #thedress into the system, for example, OSoMe will generate an interactive graph showing connections between both the hashtag and the Twitter users who participated in the debate over a dress whose color -- white and gold or blue and black -- was strangely ambiguous. The results show more people tagged #whiteandgold compared to #blueandblack.
For the Ice Bucket Challenge, another widespread viral phenomenon -- in which people doused themselves in cold water to raise awareness about ALS -- the software generates an interactive graph showing how many people tweeted #icebucketchallenge at specific Twitter users, including celebrities.
One example illustrates a co-occurrence network, in which a single hashtag comprises a "node" with lines showing connections to other related hashtags. The larger the node, the more popular the hashtag. The other example illustrates a diffusion network, in which Twitter users show up as points on a graph, and retweets or mentions show up as connecting lines. The larger a cluster of people tweeting a meme -- or the more lines showing retweets and mentions -- the more viral the topic.
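The co-occurrence construction is simple to sketch. In the hypothetical Python example below (not OSoMe's code), hashtags become nodes, and an edge between two hashtags grows heavier each time they appear in the same tweet:

    # Build a hashtag co-occurrence network from tweets.
    import itertools
    import networkx as nx

    tweets = [  # hypothetical tweets, reduced to their hashtags
        {"#thedress", "#whiteandgold"},
        {"#thedress", "#blueandblack"},
        {"#thedress", "#whiteandgold"},
    ]

    G = nx.Graph()
    for tags in tweets:
        for a, b in itertools.combinations(sorted(tags), 2):
            weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=weight + 1)

    # Edge weight ~ co-occurrence strength; node degree ~ popularity.
    for a, b, data in G.edges(data=True):
        print(a, "--", b, "co-occurrences:", data["weight"])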
OSoMe's social media tools are supported by a growing collection of 70 billion public tweets. The long-term infrastructure to store and maintain the data is provided by the IU Network Science Institute and High Performance Computing group at IU. The system does not provide direct access to the content of these tweets.
The group that manages the infrastructure to store this data is led by Geoffrey Fox, Distinguished Professor in the School of Informatics and Computing. The group whose software analyzes the data is led by Judy Qiu, an associate professor in the school.
"The collective production, consumption and diffusion of information on social media reveals a significant portion of human social life -- and is increasingly regarded as a way to 'sense' social trends," Qiu said. "For the first time, the ability to explore 'big social data' is open not just to individuals with programming skills but everyone as easy-to-use visual tools. "
In addition to pop culture trends, Menczer said, OSoMe provides insight into many other subjects, including social movements and politics, as the online spread of information plays an increasingly important role in modern communication.
The IU researchers who created OSoMe also launched another tool, BotOrNot, in 2014. BotOrNot predicts the likelihood that a Twitter account is operated by a human or a "social bot. " Bots are online bits of code used to create the impression that a real person is tweeting about a given topic, such as a product or a person.
The OSoMe project also provides an application program interface, or API, to help other researchers expand upon the tools, or create "mash-ups" that combine its powers with other software or data sources.

 

 29 

Internet of things: Closing security gaps in internet-connected households -- ScienceDaily

In the future, many everyday items will be connected to the Internet and, consequently, become targets of attackers. As all devices run different types of software, supplying protection mechanisms that work for all of them poses a significant challenge.
This is the objective pursued by the Bochum-based project "Leveraging Binary Analysis to Secure the Internet of Things," Bastion for short, funded by the European Research Council.
A shared language for all processors
Because the software running on a device more often than not remains the manufacturer's corporate secret, researchers at the Chair for System Security at Ruhr-Universität Bochum do not analyse the original source code, but the binary code of zeros and ones that they can read directly from a device.
However, different devices are equipped with processors of very different complexity: while an Intel processor in a computer understands more than 500 instructions, a microcontroller in an electronic key is able to process merely 20. An additional problem is that one and the same instruction, for example "add two numbers," is represented as different sequences of zeros and ones in the binary languages of two processor types. This renders an automated analysis of many different devices difficult.
In order to perform processor-independent security analyses, Thorsten Holz's team translates the different binary languages into a so-called intermediate language. The researchers have already successfully implemented this approach for three processor architectures: Intel, ARM and MIPS.
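A toy sketch conveys what such lifting does, although Bastion's real intermediate language is far richer. In the illustrative Python below (instruction spellings abbreviated), the same abstract operation "add two registers" has a different form on each architecture but maps to a single IR statement, so a security analysis written once against the IR covers all three:

    # Map architecture-specific instructions to one intermediate representation.
    LIFTERS = {
        "x86":  {"add eax, ebx":       ("IR_ADD", "r0", "r0", "r1")},
        "arm":  {"ADD R0, R0, R1":     ("IR_ADD", "r0", "r0", "r1")},
        "mips": {"addu $v0, $v0, $a0": ("IR_ADD", "r0", "r0", "r1")},
    }

    def lift(arch, instruction):
        """Translate one native instruction into the common IR."""
        return LIFTERS[arch][instruction]

    for arch, insn in [("x86", "add eax, ebx"),
                       ("arm", "ADD R0, R0, R1"),
                       ("mips", "addu $v0, $v0, $a0")]:
        print(arch, "->", lift(arch, insn))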
Closing security gaps automatically
The researchers then look for security-critical programming errors on the intermediate-language level. They intend to automatically close the gaps thus detected. This does not yet work for every type of software. However, the team has already demonstrated that the method is sound in principle: in 2015, the IT experts identified a security gap in Internet Explorer and succeeded in closing it automatically.
The method is expected to be completely processor-independent by the time the project is wrapped up in 2020. Integrating protection mechanisms is supposed to work for many different devices, too.
Helping faster than the manufacturers
"Sometimes, it can take a while until security gaps in a device are noticed and fixed by the manufacturers," says Thorsten Holz. This is where the methods developed by his group can help. They protect users from attacks even if security gaps had not yet been officially closed.

 

 30 

New open source software for high resolution microscopy -- ScienceDaily

Conventional light microscopy can attain only a limited resolution: light diffraction restricts it to roughly a quarter of a micrometre. High-resolution fluorescence microscopy makes it possible to obtain images with a resolution markedly below this physical limit. Stefan Hell, Eric Betzig, and William Moerner were awarded the 2014 Nobel Prize in Chemistry for developing this important key technology for biomedical research. Currently, one of the ways in which researchers in this domain are trying to attain a better resolution is by using structured illumination. At present, this is one of the most widespread procedures for imaging dynamic processes in living cells. The method achieves a resolution of 100 nanometres at a high frame rate while not damaging the specimens during measurement. Such high-resolution fluorescence microscopy is also being applied and further developed in the Biomolecular Photonics Group at Bielefeld's Faculty of Physics. For example, it is being used to study the function of the liver or the ways in which HIV spreads.
However, scientists cannot use the raw images gained with this method straight away. 'The data obtained with the microscopy method require a very laborious mathematical image reconstruction. Only then do the raw data recorded with the microscope result in a high-resolution image,' explains Professor Dr. Thomas Huser, head of the Biomolecular Photonics Group. Because this stage requires a complicated mathematical procedure that has been accessible to only a few researchers up to now, there was previously no open source software solution that was easily available to all researchers. Huser sees this as a major obstacle to the use and further development of the technology. The software developed in Bielefeld is now filling this gap.
Dr. Marcel Müller from the Biomolecular Photonics Group has managed to produce such universally implementable software. 'Researchers throughout the world are working on building new, faster, and more sensitive microscopes for structured illumination, particularly for the two-dimensional representation of living cells. For the necessary post-processing, they no longer need to develop their own complicated solutions but can use our software directly, and, thanks to its open source availability, they can adjust it to fit their problems,' Müller explains. The software is freely available to the global scientific community as an open source solution, and as soon as its availability was announced, numerous researchers, particularly in Europe and Asia, requested and installed it. 'We have already received a lot of positive feedback,' says Marcel Müller. 'That also reflects how necessary this new development has been.'

 

 31 

RedEye could let your phone see 24-7: Energy-stingy tech could give wearable computers continuous vision -- ScienceDaily

RedEye, new technology from Rice's Efficient Computing Group that was unveiled today at the International Symposium on Computer Architecture (ISCA 2016) conference in Seoul, South Korea, could provide computers with continuous vision -- a first step toward allowing the devices to see what their owners see and keep track of what they need to remember.
"The concept is to allow our computers to assist us by showing them what we see throughout the day," said group leader Lin Zhong, professor of electrical and computer engineering at Rice and the co-author of a new study about RedEye. "It would be like having a personal assistant who can remember someone you met, where you met them, what they told you and other specific information like prices, dates and times. "
Zhong said RedEye is an example of the kind of technology the computing industry is developing for use with wearable, hands-free, always-on devices that are designed to support people in their daily lives. The trend, which is sometimes referred to as "pervasive computing" or "ambient intelligence," centers on technology that can recognize and even anticipate what someone needs and provide it right away.
"The pervasive-computing movement foresees devices that are personal assistants, which help us in big and small ways at almost every moment of our lives," Zhong said. "But a key enabler of this technology is equipping our devices to see what we see and hear what we hear. Smell, taste and touch may come later, but vision and sound will be the initial sensory inputs. "
Zhong said the bottleneck for continuous vision is energy consumption because today's best smartphone cameras, though relatively inexpensive, are battery killers, especially when they are processing real-time video.
Zhong and former Rice graduate student Robert LiKamWa began studying the problem in the summer of 2012 when they worked at Microsoft Research's Mobility and Networking Research Group in Redmond, Wash., in collaboration with group director and Microsoft Distinguished Scientist Victor Bahl. LiKamWa said the team measured the energy profiles of commercially available, off-the-shelf image sensors and determined that existing technology would need to be about 100 times more energy-efficient for continuous vision to become commercially viable. This was the motivation behind LiKamWa's doctoral thesis, which pursues software and hardware support for efficient computer vision.
In an award-winning paper a year later, LiKamWa, Zhong, Bahl and colleagues showed they could improve the power consumption of off-the-shelf image sensors tenfold simply through software optimization.
"RedEye grew from that because we still needed another tenfold improvement in energy efficiency, and we knew we would need to redesign both the hardware and software to achieve that," LiKamWa said.
He said the energy bottleneck was the conversion of images from analog to digital format.
"Real-world signals are analog, and converting them to digital signals is expensive in terms of energy," he said. "There's a physical limit to how much energy savings you can achieve for that conversion. We decided a better option might be to analyze the signals while they were still analog. "
The main drawback of processing analog signals -- and the reason digital conversion is the standard first step for most image-processing systems today -- is that analog signals are inherently noisy, LiKamWa said. To make RedEye attractive to device makers, the team needed to demonstrate that it could reliably interpret analog signals.
"We needed to show that we could tell a cat from a dog, for instance, or a table from a chair," he said.
Rice graduate student Yunhui Hou and undergraduates Mia Polansky and Yuan Gao were also members of the team, which decided to attack the problem using a combination of the latest techniques from machine learning, system architecture and circuit design. In the case of machine learning, RedEye uses a technique called a "convolutional neural network," an algorithmic structure inspired by the organization of the animal visual cortex.
LiKamWa said Hou brought new ideas on system architecture and circuit design based on previous experience working with specialized processors called analog-to-digital converters at Hong Kong University of Science and Technology.
"We bounced ideas off one another regarding architecture and circuit design, and we began to understand the possibilities for doing early processing in order to gather key information in the analog domain," LiKamWa said.
"Conventional systems extract an entire image through the analog-to-digital converter and conduct image processing on the digital file," he said. "If you can shift that processing into the analog domain, then you will have a much smaller data bandwidth that you need to ship through that ADC bottleneck. "
LiKamWa said convolutional neural networks are the state-of-the-art way to perform object recognition, and the combination of these techniques with analog-domain processing presents some unique privacy advantages for RedEye.
"The upshot is that we can recognize objects -- like cats, dogs, keys, phones, computers, faces, etc. -- without actually looking at the image itself," he said. "We're just looking at the analog output from the vision sensor. We have an understanding of what's there without having an actual image. This increases energy efficiency because we can choose to digitize only the images that are worth expending energy to create. It also may help with privacy implications because we can define a set of rules where the system will automatically discard the raw image after it has finished processing. That image would never be recoverable. So, if there are times, places or specific objects a user doesn't want to record -- and doesn't want the system to remember -- we should design mechanisms to ensure that photos of those things are never created in the first place. "
Zhong said research on RedEye is ongoing. He said the team is working on a circuit layout for the RedEye architecture that can be used to test for layout issues, component mismatch, signal crosstalk and other hardware issues. Work is also ongoing to improve performance in low-light environments and other settings with low signal-to-noise ratios, he said.

 

 32 

Nations ranked on their vulnerability to cyberattacks: United States ranked 11th safest of 44 nations studied, highlighting critical vulnerabilities -- ScienceDaily

Data-mining experts from the University of Maryland and Virginia Tech recently co-authored a book that ranked the vulnerability of 44 nations to cyberattacks. Lead author V. S. Subrahmanian discussed this research on Wednesday, March 9 at a panel discussion hosted by the Foundation for Defense of Democracies in Washington, D.C.
The United States ranked 11th safest, while several Scandinavian countries (Denmark, Norway and Finland) ranked the safest. China, India, Russia, Saudi Arabia and South Korea ranked among the most vulnerable.
"Our goal was to characterize how vulnerable different countries were, identify their current cybersecurity policies and determine how those policies might need to change in response to this new information," said Subrahmanian, a UMD professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS).
The book's authors conducted a two-year study that analyzed more than 20 billion automatically generated reports, collected from 4 million machines per year worldwide. The researchers based their rankings, in part, on the number of machines attacked in a given country and the number of times each machine was attacked.
Machines using Symantec anti-virus software automatically generated these reports, but only when a machine's user opted in to provide the data.
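Those two inputs can be illustrated with a toy aggregation (hypothetical numbers, not the book's data):

    # Combine machines attacked per country with attacks per machine.
    import pandas as pd

    reports = pd.DataFrame({
        "country": ["US", "US", "DK", "IN", "IN", "IN"],
        "machine": ["m1", "m2", "m3", "m4", "m4", "m5"],
    })

    per_country = reports.groupby("country").agg(
        machines_attacked=("machine", "nunique"),
        total_attacks=("machine", "size"),
    )
    per_country["attacks_per_machine"] = (
        per_country["total_attacks"] / per_country["machines_attacked"]
    )
    print(per_country.sort_values("attacks_per_machine", ascending=False))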
Trojans, followed by viruses and worms, posed the principal threats to machines in the United States. However, misleading software (i.e., fake anti-virus programs and disk cleanup utilities) is far more prevalent in the U.S. compared with other nations that have a similar gross domestic product. These results suggest that U.S. efforts to reduce cyberthreats should focus on education to recognize and avoid misleading software.
In a foreword to the book, Isaac Ben-Israel, chair of the Israeli Space Agency and former head of that nation's National Cyber Bureau, wrote: "People--even experts--often have gross misconceptions about the relative vulnerability [to cyber attack] of certain countries. The authors of this book succeed in empirically refuting many of those wrong beliefs. "
The book's findings include economic and educational data gathered by UMD's Center for Digital International Government, for which Subrahmanian serves as director. The researchers integrated all of the data to help shape specific policy recommendations for each of the countries studied, including strategic investments in education, research and public-private partnerships.
Subrahmanian's co-authors on the book are Michael Ovelgönne, a former UMIACS postdoctoral researcher; Tudor Dumitras, an assistant professor of electrical and computer engineering in the Maryland Cybersecurity Center; and B. Aditya Prakash, an assistant professor of computer science at Virginia Tech.
A related research paper on forecasting the spread of malware in 40 countries--containing much of the same data used for the book--was presented at the 9th ACM International Conference on Web Search and Data Mining in February 2016.
Another paper, accepted for publication in the journal ACM Transactions on Intelligent Systems and Technology, looked at the human aspect of cyberattacks--for example, why some people's online behavior makes them more vulnerable to malware that masquerades as legitimate software.
The book, "The Global Cyber Vulnerability Report," by V. S. Subrahmanian, Michael Ovelgönne, Tudor Dumitras and B. Aditya Prakash, was published by Springer in December 2015.
The research paper, "Ensemble Models for Data-Driven Prediction of Malware Infections," C. Kang, N. Park, B. A. Prakash, E. Serra, and V. S. Subrahmanian, appears in Proceedings of the 9th ACM International Conf. on Web Science and Data Mining (WSDM 2016), San Francisco, February 2016.
The research paper, "Understanding the Relationship between Human Behavior and Susceptibility to Cyber-Attacks: A Data-Driven Approach," M. Ovelgönne, T. Dumitras, A. Prakash, V. S. Subrahmanian, and B. Wang, was accepted for publication in ACM Transactions on Intelligent Systems & Technology in February 2016.

 

 33 

New singalong software brings sweet melody to any cacophonous cry -- ScienceDaily

"Many people like singing but they lack the skills to do so," says Minghui Dong, the project leader at A*STAR's Institute for Infocomm Research (I2R). "We want to use our technology to help the average person sing well. "
Speech consists of three key elements: content, prosody and timbre. Content is conveyed using words; prosody, or melody in the case of singing, is expressed through rhythm and pitch; and timbre is the distinctive quality that makes a banjo sound different from a trumpet and one singer's voice different from another's. I2R Speech2Singing works by polishing melody while retaining the original content and timbre of a sound.
Existing technologies that focus on correcting melody try to align off-tune sounds to the closest note on the musical scale or to the exact note in the original score. The former works well for professional singers who may be only slightly out of tune but cannot fix those who are singing drastically off-key or simply reading out loud. The latter is better at correcting discordant tunes but ignores many other aspects of melody such as vibrato and vowel stretching.
I2R Speech2Singing uses recordings by professional singers as templates to correct the melody of a singing voice or to convert a speaking voice into a singing one. The software detects the timing of each phonetic sound using speech recognition technology and then stretches or compresses the duration of the signal using voice conversion technology to match the rhythm to that of a professional singer. A speech synthesizer then combines the time-corrected voice with pitch data and background music to produce a beautiful solo.
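The timing-correction step can be approximated with off-the-shelf tools. The Python below uses librosa as a stand-in (the I2R system relies on its own speech recognition and voice conversion), and the segment boundaries and template durations are hypothetical:

    # Stretch each phonetic segment to match a professional template's timing.
    import librosa

    y, sr = librosa.load(librosa.ex("trumpet"))  # placeholder audio clip

    # Hypothetical segment boundaries (seconds) from forced alignment, plus the
    # durations of the corresponding segments in the template recording.
    segments = [(0.0, 0.5), (0.5, 1.2)]
    template_durations = [0.8, 0.9]

    corrected = []
    for (start, end), target in zip(segments, template_durations):
        chunk = y[int(start * sr):int(end * sr)]
        rate = (end - start) / target  # output duration = input duration / rate
        corrected.append(librosa.effects.time_stretch(chunk, rate=rate))
    # Pitch would then be corrected separately (librosa.effects.pitch_shift is
    # one option) before mixing with background music.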
"When we compared the output with other currently available applications, we realized that our software generated a much better voice quality," says Dr Dong.
Singaporeans were first introduced to the software in 2013 through "Sing for Singapore," part of the official mobile app of National Day Parade 2013. And in 2014, I2R Speech2Singing won the award for best Show & Tell contribution at INTERSPEECH, a major global venue for research on the science and technology of speech communication.
Dr Dong and his team are now developing a solution to quickly add songs into the software so that large-scale song databases can be easily built.

 

 34 

Google Glass meets organs-on-chips -- ScienceDaily

Google Glass, one of the newest forms of wearable technology, offers researchers a hands-free and flexible monitoring system. To make Google Glass work for their purposes, Zhang et al. custom-developed hardware and software that takes advantage of voice commands ("ok glass") and other features in order to not only monitor but also remotely control their liver- and heart-on-a-chip systems. Using valves remotely activated by the Glass, the team introduced pharmaceutical compounds onto liver organoids and collected the results. Their results appear this week in Scientific Reports.
"We believe such a platform has widespread applications in biomedicine, and may be further expanded to health care settings where remote monitoring and control could make things safer and more efficient," said senior author Ali Khademhosseini, PhD, Director of the Biomaterials Innovation Research Center at BWH.
"This may be of particular importance in cases where experimental conditions threaten human life -- such as work involving highly pathogenic bacteria or viruses or radioactive compounds," said leading author, Shrike Zhang, PhD, also of BWH's Biomedical Division.

 

 35 

Detecting hidden malicious ads: Dynamic detection system could protect smartphones from malicious content -- ScienceDaily

"Even reputable apps can lead users to websites hosting malicious content," said Yan Chen, professor of computer science at the Northwestern University McCormick School of Engineering. "No matter what app you use, you are not immune to malicious ads. "
Most people are accustomed to the ads they encounter when interacting with apps on mobile devices. Some pop up between stages in games while others sit quietly in the sidebars. Mostly harmless, ads are a source of income for developers who often offer their apps for free. But as more and more people own smartphones, the number of malicious ads hidden in apps is growing -- tripling in just the past year.
In order to curb attacks from hidden malicious ads, Chen and his team are working to better understand where these ads originate and how they operate. This research has resulted in a dynamic system for Android that detects malicious ads as well as locates and identifies the parties that intentionally or unintentionally allowed them to reach the end user.
Last year, Chen's team used its system to test about one million apps in two months. It found that while the percentage of malicious ads is actually quite small (0.1 percent), the absolute number is still large considering that 2 billion people own smartphones worldwide. Ads that ask the user to download a program are the most dangerous, containing malicious software about 50 percent of the time.
Ad networks could potentially use Chen's system to prevent malicious ads from sneaking into the ad exchange. Ad networks buy space in the app through developers, and then advertisers bid for that space to display their ads. Ad networks use sophisticated algorithms for targeting and inventory management, but there are no tools available to check the safety of each ad.
"It's very hard for the ad networks," Chen said. "They get millions of ads from different sources. Even if they had the resources to check each ad, those ads could change. "
The team will present their research, findings, and detection system on Feb. 22, 2016 at the 2016 Network and Distributed System Security Symposium in San Diego, California.
Chen's work culminated from the exploration of the little-studied interface between mobile apps and the Web. Many in-app advertisements take advantage of this interface: when users click on the advertisement within the app, they are led to an outside web page that hosts malicious content. Whether it is an offer to download fake anti-virus software or fake media players, or to claim free gifts, the content can take many forms to trick the user into downloading software that gathers sensitive information, sends unauthorized and often charged messages, or displays unwanted ads.
When Chen's detection software runs, it electronically clicks the ads within apps and follows a chain of links to the final landing page. It then downloads that page's code and completes an analysis to determine whether or not it's malicious. It also uses machine-learning techniques to track the evolving behaviors of malware as it attempts to elude detection.
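The link-chasing step is easy to sketch, though the team's system is far more thorough. In the Python below (ours; the ad URL is a placeholder), an ad's redirect chain is followed to its landing page and a few toy features are derived for a classifier:

    # Follow an ad's redirect chain and extract simple landing-page features.
    import requests

    ad_url = "http://example.com/ad-click"  # hypothetical ad target

    response = requests.get(ad_url, timeout=10, allow_redirects=True)
    chain = [r.url for r in response.history] + [response.url]
    print("redirect chain:", " -> ".join(chain))

    page = response.text.lower()
    features = {  # toy features; a real system would extract far more
        "chain_length": len(chain),
        "mentions_download": "download" in page,
        "mentions_free": "free" in page,
    }
    print(features)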
Currently, Chen's team is testing ten times as many ads with the intention of building a more efficient system. He said their goal is to diagnose and detect malicious ads even faster. As people put more and more private information into their phones, attackers are motivated to pump more malicious ads into the market. Chen wants to give ad networks and users the tools to be ready.
"Attackers follow the money," Chen said. "More people are putting their credit card and banking information into their phones for mobile payment options. The smartphone has become a treasure for attackers, so they are investing heavily in compromising them. That means we will see more and more malicious ads and malware. "

 

 36 

Trawling the net to target internet trolls -- ScienceDaily

The software, known as FireAnt (Filter, Identify, Report, and Export Analysis Tool), can speedily download, devour, and discard large collections of online data, leaving relevant and important information for further investigation, all at the touch of a button.
Members of the University's Centre for Corpus Approaches to Social Science (CASS) led by Dr Claire Hardaker have produced this cutting-edge tool so that they can pinpoint offenders on busy social networks such as Twitter.
FireAnt was built as part of an international collaboration with corpus linguist and software expert Laurence Anthony, a professor at Waseda University, Japan and honorary research fellow at CASS.
While initially designed to download and handle data from Twitter, FireAnt can analyse texts from almost any online source, including sites such as Facebook and Google+.
"We have developed a software tool designed to enhance the signal and suppress the noise in large datasets," explains Dr Hardaker.
"It will allow the ordinary user to download Twitter data for their own analyses. Once this is collected, FireAnt then becomes an intelligent filter that discards unwanted messages and leaves behind data that can provide all-important answers. The software, which we offer as a free resource for those interested in undertaking linguistic analysis of online data, uses practical filters such as user-name, location, time, and content.
"The filtered information can then be presented as raw data, a time-series graph, a geographical map, or even a visualization of the network interactions. Users don't need to know any programming to use the tool -- everything can be done at the push of a button. "
FireAnt is designed to reduce potentially millions of messages down to a sample that contains only what the user wants to see, such as every tweet containing the word 'British', sent in the middle of the night, from users whose bio contains the word 'patriotic'.
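That example filter is simple to express in code. The sketch below (our illustration, not FireAnt's implementation, with hypothetical records) applies exactly those three conditions to already-downloaded tweets:

    # Filter tweets by content, time of day, and author bio.
    from datetime import datetime

    tweets = [  # hypothetical downloaded records
        {"text": "Proud to be British", "user_bio": "patriotic, proud",
         "created_at": datetime(2016, 7, 5, 3, 12)},
        {"text": "Good morning!", "user_bio": "runner",
         "created_at": datetime(2016, 7, 5, 9, 0)},
    ]

    def middle_of_night(ts):
        return 0 <= ts.hour < 5

    matches = [t for t in tweets
               if "british" in t["text"].lower()
               and middle_of_night(t["created_at"])
               and "patriotic" in t["user_bio"].lower()]
    print(len(matches), "tweet(s) match")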
Dr Hardaker, a lecturer in forensic corpus linguistics, began an Economic and Social Research Council-funded project researching abusive behaviour on Twitter in December 2013. The project quickly demonstrated that, while tackling anti-social online behaviour is of key importance, sites like Twitter produce data at such high volumes that simply trying to identify relevant messages amongst all the irrelevant ones is a huge challenge in itself.
Less than a year into the project, Dr Hardaker and her team were invited to Twitter's London headquarters to present project findings to the Crown Prosecution Service and Twitter itself. The research subsequently influenced Twitter to update its policy on abusive online behaviour.
The interest from the Crown Prosecution Service and the police encouraged Dr Hardaker to work with fellow corpus linguist, Professor Laurence Anthony to turn the research into a tool that could both collect online data, and then filter out the 'noise' from millions of messages, thereby enhancing the useful signals that can lead to the identification of accounts, texts, and behaviours of interest.
Dr Hardaker explained that the Government is trying to understand how social networks are involved in issues ranging from child-grooming and human-trafficking to fraud and radicalization. A key aspect of Dr Hardaker's work is a focus on the process of escalation from online messages that may start out as simply unpleasant or annoying, but that intensify to extreme, illegal behaviours that could even turn into physical, offline violence. In this respect, FireAnt can offer the opportunity to pinpoint high-risk individuals and networks that may go on to be a threat, whether to themselves or others.
Dr Claire Hardaker specialises in research into online aggression, manipulation and deception. She is currently working on projects that involve analysing live online social networks for the escalation of abusive behaviour, and the use of the Internet in transnational crime such as human trafficking and modern slavery.
FireAnt is free to download from: http://www.laurenceanthony.net/software/fireant

 

 37 

Machine learning as good as humans' in cancer surveillance, study shows -- ScienceDaily

Every state in the United States requires cancer cases to be reported to statewide cancer registries for disease tracking, identification of at-risk populations, and recognition of unusual trends or clusters. Typically, however, busy health care providers submit cancer reports to equally busy public health departments months into the course of a patient's treatment rather than at the time of initial diagnosis.
This information can be difficult for health officials to interpret, which can further delay health department action when action is needed. The Regenstrief Institute and IU researchers have demonstrated that machine learning can greatly facilitate the process by automatically and quickly extracting crucial meaning from plaintext, also known as free-text, pathology reports and using it for decision-making.
"Towards Better Public Health Reporting Using Existing Off the Shelf Approaches: A Comparison of Alternative Cancer Detection Approaches Using Plaintext Medical Data and Non-dictionary Based Feature Selection" is published in the April 2016 issue of the Journal of Biomedical Informatics .
"We think that its no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not," said study senior author Shaun Grannis, M. D., M. S., interim director of the Regenstrief Center of Biomedical Informatics. "We have come to the point in time that technology can handle this. A human's time is better spent helping other humans by providing them with better clinical care. "
"A lot of the work that we will be doing in informatics in the next few years will be focused on how we can benefit from machine learning and artificial intelligence. Everything -- physician practices, health care systems, health information exchanges, insurers, as well as public health departments -- are awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can't do it -- but computers can. "
Dr. Grannis, a Regenstrief Institute investigator and an associate professor of family medicine at the IU School of Medicine, is the architect of the Regenstrief syndromic surveillance detector for communicable diseases and led the technical implementation of Indiana's Public Health Emergency Surveillance System -- one of the nation's largest. Studies over the past decade have shown that this system detects outbreaks of communicable diseases seven to nine days earlier and finds four times as many cases as human reporting while providing more complete data.
"What's also interesting is that our efforts show significant potential for use in underserved nations, where a majority of clinical data is collected in the form of unstructured free text," said study first author Suranga N. Kasthurirathne, a doctoral student at School of Informatics and Computing at IUPUI. "Also, in addition to cancer detection, our approach can be adopted for a wide range of other conditions as well. "
The researchers sampled 7,000 free-text pathology reports from over 30 hospitals that participate in the Indiana Health Information Exchange and used open source tools, classification algorithms, and varying feature selection approaches to predict if a report was positive or negative for cancer. The results indicated that a fully automated review yielded results similar or better than those of trained human reviewers, saving both time and money.
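The paper compares several classifiers and feature-selection schemes; the sketch below shows just one representative pipeline of that general kind, on hypothetical reports (1 = cancer present, 0 = absent):

    # Classify free-text pathology reports with a bag-of-words pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reports = ["sheets of atypical cells consistent with malignancy",
               "benign tissue, no evidence of carcinoma",
               "sheet of malignant cells identified",
               "normal mucosa without atypia"]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(reports, labels)

    print(model.predict(["sheets of cells noted in specimen"]))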
"Machine learning can now support ideas and concepts that we have been aware of for decades, such as a basic understanding of medical terms," said Dr. Grannis. "We found that artificial intelligence was as least as accurate as humans in identifying cancer cases from free-text clinical data. For example the computer 'learned' that the word 'sheet' or 'sheets' signified cancer as 'sheet' or 'sheets of cells' are used in pathology reports to indicate malignancy.
"This is not an advance in ideas, it's a major infrastructure advance -- we have the technology, we have the data, we have the software from which we saw accurate, rapid review of vast amounts of data without human oversight or supervision. "

 

 38 

Finding the next new tech material: The computational hunt for the weird and unusual -- ScienceDaily

"It's the weird or unusual structure and behaviors of a material that makes it useful for a technological application," said Ames Laboratory Chief Research Officer Duane Johnson. "So the questions become: How do we find those unusual structures and behaviors? How do we understand exactly how they happen? Better yet, how do we control them so we can use them? "
The answer lies in fully understanding what scientists call solid-to-solid phase transformations: changes of the structure of one solid phase into another under stress, heat, magnetic field, or other fields. School kids learn, for example, that water (liquid phase) transforms when heated to steam (gas phase). But a solid, like a metallic alloy, can have various structures exhibiting order or disorder depending on changes in temperature and pressure, still remain a solid, and display key changes in properties like shape memory, magnetism, or energy conversion.
"Those solid-to-solid transformations are behind a lot of the special features we like and want in materials," explained Johnson, who heads up the project, called Mapping and Manipulating Materials Phase Transformation Pathways. "They are behind things that are already familiar to us, like the expandable stents used in heart surgery and bendable eyeglass frames; but they are also for uses we're still exploring, like energy-harvesting technologies and magnetic cooling. "
The computer codes, whose development Johnson leads, advance and adapt both new and existing software. One such code, called MECCA (Multiple-scattering Electronic-structure Code for Complex Alloys), is uniquely designed to tackle the complex problem of analyzing and predicting the atomic structural changes and behaviors of solids as they undergo phase transformations, and to reveal why they behave as they do so that those behaviors can be controlled.
The program will assist and inform other ongoing materials research projects at Ames Laboratory, including ones with experimentalists on the hunt for new magnetic and high-entropy alloys, thermoelectrics, rare-earth magnets, and iron-arsenide superconductors.
"This theoretical method will become a key tool to guide the experimentalists to the compositions most likely to have unique capabilities, and to learn how to manipulate and control them for new applications," Johnson said.

 

 39 

Automatic debugging of software -- ScienceDaily

Computer programs often contain defects, or bugs, that need to be found and repaired. This manual "debugging" usually requires valuable time and resources. To help developers debug more efficiently, automated debugging solutions have been proposed. One approach goes through information available in bug reports. Another goes through information collected by running a set of test cases. Until now, explains David Lo from Singapore Management University's (SMU) School of Information Systems, there has been a "missing link" that prevents these information gathering threads from being combined.
Dr Lo, together with colleagues from SMU, has developed an automated debugging approach called Adaptive Multimodal Bug Localisation (AML). AML gleans debugging hints from both bug reports and test cases, and then performs a statistical analysis to pinpoint program elements that are likely to contain bugs.
"While most past studies only demonstrate the applicability of similar solutions for small programs and 'artificial bugs' [bugs that are intentionally inserted into a program for testing purposes], our approach can automate the debugging process for many real bugs that impact large programs," Dr Lo explains. AML has been successfully evaluated on programs with more than 300,000 lines of code. By automatically identifying buggy code, developers can save time and redirect their debugging effort to designing new software features for clients.
Dr Lo and his colleagues are now planning to contact several industry partners to take AML one step closer toward integration as a software development tool.
Dr Lo's future plans involve developing an Internet-scale software analytics solution. This would involve analysing massive amounts of data that passively exist in countless repositories on the Internet in order to transform manual, painstaking and error-prone software engineering tasks into automated activities that can be performed efficiently and reliably. This is done, says Dr Lo, by harvesting the wisdom of the masses -- accumulated through years of effort by thousands of software developers -- hidden in these passive, distributed and diversified data sources.

 

 40 

Self-learning arm controlled by thought -- ScienceDaily

According to the developers -- Mikhail Grigoriev, Nikita Turushev and Evgeniy Tarakanets, fellows at the Laboratory of Medical Instrument-Making at the Institute of Non-Destructive Testing -- prosthetic limbs have been manufactured for decades, but making them functional enough to serve as a full replacement for a lost body part is still impossible.
"To date, there are quite available traction prostheses. Their motions are carried out by means of traction belts which are superimposed from the repaired arm across the back as loop around of the healthy shoulder. That is the prosthesis performs by motions of a healthy arm. The drawbacks of this type are in need of unnatural body motions to control it," said Nikita Turushev.
The algorithm being developed by the polytechnic researchers will free wearers from traction belts. Sensors on the prosthesis will pick up myoelectric signals: the human brain sends signals to the muscles to make them perform the necessary actions, and the system will analyze the commands arriving at the healthy part of the arm and "guess" what motion the prosthesis should perform.
"Initially, software will be universal, but we will adapt it to each specific artificial arm. Further, a machine learning algorithm will copy its host wearing the prosthesis: to fix myoelectric signals and choose required motions," says Mikhail Grigoriev.
The young scientists are now "teaching" the algorithm different signals and their meanings. Initially, they will examine at least 150 people with healthy limbs. Having "memorized" the signals and the meanings behind them, the software will reproduce them at the medical trial stage.
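No implementation details are given, but the "guess the motion" step described above is, at its core, a supervised classification problem over myoelectric (EMG) signal features. A hedged sketch with synthetic signals and a generic classifier standing in for the team's actual algorithm:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def features(window):
    # Two classic surface-EMG features per signal window:
    # mean absolute value and zero-crossing count.
    return [np.mean(np.abs(window)), int(np.sum(np.diff(np.sign(window)) != 0))]

# Synthetic stand-ins for recorded myoelectric windows: "grip" signals are
# simulated with higher amplitude than "open" signals.
grip = [rng.normal(0, 1.0, 200) for _ in range(50)]
open_hand = [rng.normal(0, 0.3, 200) for _ in range(50)]

X = [features(w) for w in grip + open_hand]
y = ["grip"] * 50 + ["open"] * 50

clf = RandomForestClassifier(random_state=0).fit(X, y)

# A new window is classified into the motion the prosthesis should perform.
print(clf.predict([features(rng.normal(0, 1.0, 200))]))  # likely ['grip']
```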
The researchers won a grant from the Russian Foundation for Basic Research for the project in 2015. Within two years they are expected to present a prototype of the prosthesis and the software that supports its operation.

 

 41 

Zip software can detect the quantum-classical boundary: Compression of experimental data reveals the presence of quantum correlations -- ScienceDaily

"We found a new way to see a difference between the quantum universe and a classical one, using nothing more complex than a compression program," says Dagomir Kaszlikowski, a Principal Investigator at the Centre for Quantum Technologies (CQT) at the National University of Singapore.
Kaszlikowski worked with other researchers from CQT and collaborators at the Jagiellonian University and Adam Mickiewicz University in Poland to show that compression software, applied to experimental data, can reveal when a system crosses the boundary of our classical picture of the Universe into the quantum realm. The work is published in the March issue of New Journal of Physics.
In particular, the technique detects evidence of quantum entanglement between two particles. Entangled particles coordinate their behaviour in ways that cannot be explained by signals sent between them or properties decided in advance. This phenomenon has shown up in many experiments already, but the new approach does without an assumption that is usually made in the measurements.
"It may sound trivial to weaken an assumption, but this one is at the core of how we think about quantum physics," says co-author Christian Kurtsiefer at CQT. The relaxed assumption is that particles measured in an experiment are independent and identically distributed -- or i.i.d.
Experiments are typically performed on pairs of entangled particles, such as pairs of photons. Measure one of the light particles and you get results that seem random. The photon may have a 50:50 chance of having a polarization that points up or down, for example. The entanglement shows up when you measure the other photon of the pair: you'll get a matching result.
A mathematical relation known as Bell's theorem shows that quantum physics allows matching results with greater probability than is possible with classical physics. This is what previous experiments have tested. But the theorem is derived for just one pair of particles, whereas scientists must work out the probabilities statistically, by measuring many pairs. The situations are equivalent only as long as each particle-pair is identical and independent of every other one -- the i.i.d. assumption.
With the new technique, the measurements are carried out the same way but the results are analyzed differently. Instead of converting the results into probabilities, the raw data (in the form of lists of 1s and 0s) is used directly as input to compression software.
Compression algorithms work by identifying patterns in the data and encoding them in a more efficient way. When applied to data from the experiment, they effectively detect the correlations resulting from quantum entanglement.
In the theoretical part of the work, Kaszlikowski and his collaborators worked out a relation akin to Bell's theorem that's based on the 'normalized compression difference' between subsets of the data. If the universe is classical, this quantity must stay less than zero. Quantum physics, they predicted, would allow it to reach 0.24. The theorists teamed up with Kurtsiefer's experimental group to test the idea.
First the team collected data from measurements on thousands of entangled photons. Then they used an open-source compression algorithm known as the Lempel-Ziv-Markov chain algorithm (used in the popular 7-Zip archiver) to calculate the normalized compression differences. They found a value exceeding zero -- 0.0494 ± 0.0076 -- proving their system had crossed the classical-quantum boundary. The value is less than the maximum predicted because the compression does not reach the theoretical limit and the quantum states cannot be generated and detected perfectly.
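The paper defines its own "normalized compression difference", but the flavor of the analysis can be reproduced with the closely related normalized compression distance, computed with the same LZMA algorithm through Python's standard lzma module (the bit strings here are invented stand-ins for the photon measurement records):

```python
import lzma
import random

def c(data: bytes) -> int:
    # Compressed size in bytes, using the same LZMA algorithm as 7-Zip.
    return len(lzma.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: near 0 for strongly correlated
    # strings, near 1 for unrelated ones.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented stand-ins for the 0/1 measurement records of the two photons.
random.seed(1)
alice = bytes(random.choice(b"01") for _ in range(4096))
bob_correlated = alice                                        # matching outcomes
bob_independent = bytes(random.choice(b"01") for _ in range(4096))

print(ncd(alice, bob_correlated))   # small: compressor finds the shared pattern
print(ncd(alice, bob_independent))  # close to 1: no shared structure
```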
It's not yet clear whether the new technique will find practical applications, but the researchers see their 'algorithmic' approach to the problem fitting into a bigger picture of how to think about physics. They derived their relation by considering correlations between particles produced by an algorithm fed to two computing machines.
"There is a trend to look at physical systems and processes as programs run on a computer made of the constituents of our universe," write the authors. This work presents an "explicit, experimentally testable example. "

 

 42 

No need for supercomputers: Russian scientists suggest a PC to solve complex problems tens of times faster than with massive supercomputers -- ScienceDaily

Senior researchers Vladimir Pomerantcev and Olga Rubtsova, working under the guidance of Professor Vladimir Kukulin (SINP MSU), were able to use an ordinary desktop PC with a GPU to solve complicated integral equations of quantum mechanics -- previously solvable only with powerful, expensive supercomputers. According to Vladimir Kukulin, the personal computer does the job much faster: in 15 minutes it completes work that normally requires two to three days of supercomputer time.
The equations in question were formulated in the 1960s by the Russian mathematician Ludwig Faddeev. They describe the scattering of a few quantum particles, i.e., they represent a quantum mechanical analog of the Newtonian theory of three-body systems. Soon afterwards, an entire field of quantum mechanics called the "physics of few-body systems" grew out of this work.
This area is of great interest to scientists working in quantum mechanics, nuclear and atomic physics, and the theory of scattering. For several decades after Faddeev's pioneering work, one of their main goals was to learn to solve these complicated equations. However, due to the incredible complexity of the calculations, the case of fully realistic interactions between a system's particles remained out of researchers' reach for a long time, until supercomputers appeared.
The situation changed dramatically after the SINP group decided to use one of the new Nvidia GPUs designed for gaming on their personal computer. According to one of the authors, Vladimir Kukulin, Head of the Laboratory of Nuclear Theory, the processor was not even the most expensive available, costing in the range of $300-500.
The main problem in solving the scattering equations of multiple quantum particles was the calculation of the integral kernel -- a huge two-dimensional table consisting of tens or hundreds of thousands of rows and columns, with each element of this enormous matrix being the result of extremely complex calculations. But the table turned out to resemble a monitor screen with tens of billions of pixels, and with a good GPU it was quite possible to compute all of them. Using software developed by Nvidia and writing their own programs, the researchers split their calculations across many thousands of streams and were able to solve the problem brilliantly.
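The group's code isn't public in the article, but the core trick -- evaluating every element of a huge kernel matrix in parallel on the GPU rather than looping on the CPU -- can be sketched with CuPy, a Python library that mirrors NumPy on Nvidia GPUs. The kernel function below is a toy placeholder, not the actual Faddeev kernel:

```python
# Sketch: fill an N x N kernel matrix element-wise on the GPU.
import cupy as cp

N = 5_000  # real tables run to tens or hundreds of thousands of rows

p = cp.linspace(0.1, 10.0, N)          # momentum grid (arbitrary units)
P, Q = cp.meshgrid(p, p, sparse=True)  # broadcastable row/column grids

# Each element K[i, j] is evaluated by one of thousands of parallel GPU
# threads -- the "monitor screen of pixels" picture from the article.
# The formula is a toy stand-in for the real integral kernel.
K = cp.exp(-((P - Q) ** 2)) / (P * Q + 1.0)

print(float(K.sum()))  # force the computation and copy one number back
```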
"We reached the speed we couldn't even dream of," Vladimir Kukulin said. "The program computes 260 million of complex double integrals on a desktop computer within three seconds only. No comparison with supercomputers! My colleague from the University of Bochum in Germany (recently deceased, mournfully), whose lab did the same, carried out the calculations by one of the largest supercomputers in Germany with the famous blue gene architecture that is actually very expensive. And what his group is seeking for two or three days, we do in 15 minutes without spending a dime. "
The most remarkable thing is that graphics processors of the required quality, and a huge amount of software for them, have existed for ten years already, but no one used them for such calculations, preferring supercomputers. In any case, the physicists considerably surprised their Western counterparts.
"This work, in our opinion, opens up completely new ways to analyze nuclear and resonance chemical reactions," says Vladimir Kukulin. "It can also be very useful for solving a large number of computing tasks in plasma physics, electrodynamics, geophysics, medicine and many other areas of science. We want to organize a kind of training course, where researchers from various scientific areas of peripheral universities that do not have access to supercomputers could learn to do on their PCs the same thing that we do. "

 

 43 

Google Project Bloks Tangible Programming For Kids

Google Research is working on a new initiative to introduce kids to computing in an entirely hands-on, physical way. A prototype has been produced to show how the tangible programming approach combines the way children innately play and learn with computational thinking.
As explained in this introductory video, Project Bloks is a research project with the aim of creating an open hardware platform to help developers, designers, and researchers build the next generation of tangible programming experiences for kids.
The project is a collaboration between Google Creative Lab, design consulting firm IDEO and Paulo Blikstein, Assistant Professor of Education at Stanford University.
In the video Blikstein refers to the long history of tangible programming stretching back to Seymour Papert in the 1970s. The Google Research blog post announcing Project Bloks goes further back and says that it:
is preceded and shaped by a long history of educational theory and research in the area of hands-on learning. From Friedrich Froebel, Maria Montessori and Jean Piaget’s pioneering work in the area of learning by experience, exploration and manipulation, to the research started in the 1970s by Seymour Papert and Radia Perlman with LOGO and TORTIS.
Bloks is intended to make coding a fun activity for young children by putting it in the context of collaborative play and introducing interactivity with the real world, for example switching light bulbs on and off. Unlike Scratch Blocks or Blockly, where the drag-and-drop blocks are snippets of software, here they are physical control modules, called Pucks, that provide signals to go, stop, turn on and off, and so on.
The main control interface, the Brain Board, is built on a Raspberry Pi Zero module. It is the communication interface with the other components, as well as providing power, Wi-Fi and Bluetooth connectivity. The Base Boards are connectible blocks onto which Pucks can be placed. They are modular and can be connected in sequence and in different orientations to create different programming flows and experiences. They also provide both haptic and LED feedback to the user when a control is activated and can send audio feedback to the Brain Board.
When a Puck is placed onto a Base Board it is then connected directly, or via another base board, to the Brain Board and sends that specific command back to the software.
Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as 'turn on or off', 'move left' or 'jump'. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they're also incredibly cheap and easy to make. At a minimum, all you'd need to make a puck is a piece of paper and some conductive ink.
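To make that flow concrete, here is a toy model -- entirely illustrative, not Google's code -- of how a sequence of Pucks read off the Base Boards might be interpreted by the Brain Board and dispatched to a connected toy:

```python
# Toy model of the Bloks flow: Base Boards report which Pucks are placed,
# in order, and the Brain Board turns that sequence into device commands.
# All names here are invented for illustration; this is not Google's API.

PUCK_ACTIONS = {
    "go": "motor:start",
    "stop": "motor:stop",
    "turn_on": "light:on",
    "turn_off": "light:off",
}

def run_program(pucks, send):
    """Interpret the physical puck sequence as a program."""
    for puck in pucks:
        command = PUCK_ACTIONS.get(puck)
        if command is None:
            raise ValueError(f"unknown puck: {puck}")
        send(command)  # e.g. over Wi-Fi/Bluetooth from the Brain Board

# Pucks as placed left-to-right on the Base Boards:
run_program(["turn_on", "go", "stop", "turn_off"], send=print)
```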
Development on the project began in 2013, and it's being unveiled now so that Google can start gauging developer interest and finding partners who want to use the platform to build toys and educational products with it.
To show how designers, developers, and researchers might make use of the system, the Project Bloks team has created a reference device, called the Coding Kit. This lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices - including the drawing robot shown in this video:
As this video shows, the motivation for Project Bloks is educational. The team's position paper, Project Bloks: designing a development platform for tangible programming for children, concludes:
Research and design for children is our passion. Designing the Project Bloks system was, above all, an exercise to demonstrate how much children can accomplish with the right tools, how much they can learn when they are not told what to do, and how much reward exploration can bring them.
The vision of Seymour Papert 50 years ago was a powerful one: children will program the computer. It won’t be the other way around.
Project Bloks: designing a development platform for tangible programming for children (pdf)  Paulo Blikstein (Stanford University), Arnan Sipitakiat (Chiang Mai University, Thailand), Jayme Goldstein (Google), João Wilbert (Google), Maggie Johnson (Google), Steve Vranakis (Google), Zebedee Pedersen (Google), Will Carey (IDEO).

 

 44 

Microsoft is adding wheelchair options for Xbox avatars

Many Xbox gamers spend a huge amount of time carefully crafting their avatars, adjusting each detail meticulously to ensure that their digital representation is as close as possible to how they appear in the real world.
Some avatar elements can't be customized, though - but one very notable omission will soon be rectified. As things currently stand, there's no way to customize an avatar to include a wheelchair - and for many wheelchair users who are also passionate Xbox gamers, that's obviously disappointing.
As WinBeta spotted, one gamer raised the possibility of adding wheelchairs to avatar customization options in a tweet to Xbox chief Phil Spencer.
After another Twitter user suggested that it might be worth starting a petition to raise awareness of the issue, Spencer replied that no petition was necessary, adding: "We hear you. This is something that we've already looked at, not far off. "
There's no indication yet of exactly when gamers will be able to add a wheelchair to their avatars, but the news will no doubt be welcomed by many.
Source: @XboxP3 via WinBeta

 

 45 

Windows 10 Anniversary Update, Salesforce On Outlook: Microsoft Roundup

Mark your calendars, Windows 10 users: Microsoft is releasing its Windows 10 Anniversary Update on Aug. 2, 2016.
This is one of the biggest updates to arrive on Windows 10 since it was released to the general public on July 29, 2015. Since then, Microsoft reports its newest OS is running on 350 million devices and that users have spent more than 135 billion hours on it.
New features arriving in the Anniversary Update are designed to improve security, digital inking capabilities, Cortana, and the Edge browser. Microsoft's personal digital assistant will now be available above the lock screen. New Edge extensions will include AdBlock and LastPass.
The Anniversary Update is arriving a few days after the official one-year mark for Windows 10, but Microsoft still plans to terminate its free upgrade offer on July 29. If you haven't downloaded Windows 10, you have a few weeks to upgrade and get the Anniversary Update features for free.
Of course, not everyone wants to upgrade to Windows 10. Microsoft this week paid out $10,000 to a customer after an unauthorized upgrade caused her business computer to slow down, crash, and become unusable for days at a time.
The customer, Teri Goldstein, owns a travel agency business in Sausalito, Calif. When attempts to contact Microsoft support proved unsuccessful, she sued the company for the cost of wages lost and a new computer. She won her case.
This occurrence exemplifies the aggressive upgrade strategy Microsoft has adopted to increase the Windows 10 user base. Following user complaints, Microsoft changed its upgrade prompt for Windows 10 to be less aggressive, reported The Verge.
In order to drive more upgrades, Microsoft recently changed its upgrade prompt so that if users tried to dismiss it by clicking the red X, the update would be scheduled anyway. The UI change was confusing, so Microsoft is now giving users the option to "decline free offer," and if they click the X, the upgrade won't download.
Microsoft this week confirmed plans to kill the Surface 3 in December 2016. The news was originally reported by Thurrott.com, which noticed multiple editions of the budget hybrid were missing from Microsoft's online store.
The Surface 3 was introduced in March 2015 as a more affordable version of the Surface Pro 3; given its age, a spike in popularity was unlikely. Microsoft later confirmed inventory is limited and production will end by December.
While a Surface 4 would seem a natural next step for Redmond's hardware portfolio, neither officials nor Microsoft's rumor mill have discussed plans to launch an upgraded model. There's a chance Microsoft will exit the budget device space altogether, given its recent efforts to bring its premium Surface Pro 4 and Surface Book to enterprise customers.
Salesforce launched a new integration with Microsoft Outlook to accelerate productivity for business users, specifically sales reps. A new add-in called Lightning for Outlook enables users to access Salesforce data from the Outlook inbox without the need to go back and forth between apps.
Microsoft and Salesforce have a strategic partnership that has resulted in integrations throughout the Office suite. However, this latest announcement marks the first time Lightning components are available in another app, said Salesforce Sales Cloud director Greg Gsell.
Office updates include new functionality for Sway in Office 365. Subscribers now have access to three new features, including the abilities to add passwords, add more content (like text, photos, and videos), and remove the informational footer for a more polished appearance.
Windows Insiders in the Fast ring received Windows 10 preview build 14376 for PC and mobile. This build does not include any major new features, but it packs several improvements for both types of devices. As with all early builds, there are issues to be aware of before downloading.
Dona Sarkar, software engineer and new head of the Windows Insider program, announced the "Messaging everywhere" feature will not be arriving as part of the Windows 10 Anniversary Update.
This feature would allow users to send and receive texts from a Windows 10 phone to a Windows 10 PC. It was well-received by Insiders during testing, but Microsoft believes it can create a better experience through the Skype app and has decided to withhold it from launch.

 

 46 

Umi Touch Review: All specs, no polish

You probably haven’t heard of the company Umi. They’re a relatively unknown Chinese manufacturer with a mission to make affordable devices with compelling feature sets. And they’re not alone in pursuing that formula: there are tons of companies throughout Asia trying their hardest to grab a share of the entry-level market.
OnePlus, another Chinese startup founded by former Oppo employees, succeeded in competing with other electronics giants by selling well-rounded and affordable Android phones; however, they tend to be the exception rather than the rule. Historically I've taken issue with smaller OEMs and the devices they produce, as they rarely live up to their advertised claims, making quality a big hit or a huge miss seemingly at random.
However, I am willing to give any company a chance to impress, which is why I have the Umi Touch in the office to review. Priced at $160, the Touch carries a respectable list of specs including a 5.5-inch 1080p display, 13-megapixel Sony IMX328 camera, a huge 4,000 mAh battery, a metal design, and even a fingerprint sensor. It’s running the latest version of Android as well, without much bloatware or customization.
So can the Umi Touch break away from the norm and actually present a good package from a lesser known Chinese manufacturer? I’ve spent more than a month with the Touch to find out.
The design of the Umi Touch is nothing too fancy. The phone is a pretty typical rounded-rectangle with glass on the front, and metal on the back that curves around each long edge. This metal is flanked by plastic along the top and bottom of the rear, finished to look similar to the metal plate.
This method of disguising plastic doesn’t often work from a visual standpoint – the difference in materials is obvious – and is something not seen on premium handsets. Of course at this price point, the Umi Touch isn’t a premium device, so I can forgive the company for opting to design it in this way.
I certainly prefer the metal back to the plastic build budget devices like the Moto G employ, and in general this 5.5-inch device is comfortable to hold thanks to decent curvature along each edge. However, there’s no mistaking this metal design for the best on the market; the HTC 10 and Nexus 6P , for example, are still several steps ahead in terms of visual appeal.
There are a couple of design issues with the Umi Touch that expose it as a cheap handset. The seams between the metal and plastic on the rear are very noticeable and not particularly even, which is something a high-end phone manufacturer wouldn’t tolerate. The front panel also lacks symmetry in design: the front camera is at a different height to the front flash, and the home button below the display is very slightly askew in the unit I received to review.
Most damning is the difference between the press renders of the Umi Touch’s front panel and the real model. The renders appear to show a large display with very little bezel at the edges, but in reality the bezels are considerably larger and quite noticeable. I don’t like this sort of deception at all, as the renders portray a design that’s significantly more attractive than the actual device. Buyers expecting a sleek, bezel free design could be seriously disappointed when their Touch arrives in the mail.
The Umi Touch (left) is almost as large as the Google Nexus 6P (right)
The bezel size makes this 5.5-inch phone larger than average: it’s bigger than the Galaxy S7 Edge by a decent margin, and almost the same size as my 5.7-inch Nexus 6P. At 8.5mm and nearly 200 grams, it’s a thicker and heavier device too. The weight in particular is very noticeable, as it helps to make the Touch feel dense for a phone of this class, probably due to the 4,000 mAh battery inside.
The glass panel on the front is both good and bad. I like the way the glass curves away to the polished metal rim, creating a “2.5D” edgeless feel while swiping across it. However, the coating Umi has used isn’t the same quality as most other smartphones, which makes it a bigger fingerprint magnet and reduces the swooshability. Swiping across the display has more resistance than the 2015 Moto G , for example.
Below the display is a fingerprint sensor; something seldom seen at this price point. The sensor is touch-activated, like most fingerprint sensors found in smartphones, and it works far better than I was expecting. It’s similar in speed to a Nexus 6P and only slightly less accurate, plus it doubles as a capacitive touch home button. Kudos to Umi for getting this key feature of their smartphone working well.
Umi has designed the other capacitive navigation buttons, found to the left and right of the fingerprint sensor, in a similar fashion to OnePlus. There are no logos on the buttons, which can be a little confusing before you realize the right button is back, and the left button is menu. I’d have far preferred to see the back button on the left, and an app switcher button on the right, as opposed to the legacy and generally unnecessary menu button. A software option to change this, similar to OnePlus, would be much appreciated.
The power button and volume rocker are found on the right side in a comfortable position, and both exhibit a decent clicky response. On the bottom is a standard micro-USB port, while the 3.5mm audio jack is on the top. The left edge features a single tray with two slots: one for a micro-SIM, and another for either a second micro-SIM or a microSD card depending on your needs.
There is a single speaker on the Umi Touch, located at the bottom of the rear panel. There’s no front facing audio here, so you might need to cup your hands around the speaker to get a decent audio experience. Quality is very average as you’d imagine, and it’s not particularly loud either.

Total 46 articles. Generated at 2016-07-05 06:00