Announcement

75 articles, 2016-07-05 18:00
(1.02/2)  1  Google's DeepMind AI will identify ocular ailments at Moorfields Eye Hospital

Partnership will create a machine learning system to spot early occurrences of eye problems. 2016-07-05 10:53 2KB www.v3.co.uk

(1.02/2)  2  China to restrict online news sites from using social media as source

A state media agency in China has announced that it will start cracking down on news websites that use social media as a source for reports without properly verifying them first, to combat false news. 2016-07-05 10:50 1KB feedproxy.google.com

(0.02/2)  3  Advanced Concepts of Java Object Serialization

Probe serialization and its related concepts and learn to delineate some of its nooks and crannies, along with their implementation in the Java API. 2016-07-05 00:00 7KB www.developer.com

(0.01/2)  4  Security Think Tank: Biometrics have key role in multi-factor security

How can organisations move to biometric authentication of users without running the risk of exposing sensitive biometric information? 2016-07-05 13:49 1KB www.computerweekly.com

(0.01/2)  5  Windows 10 Anniversary Update Slated For Aug. 2

Microsoft plans to mark the first year of its latest OS with several new features in a Windows 10 Anniversary Update. 2016-07-05 14:05 4KB www.informationweek.com

(0.01/2)  6  Understanding Gradle, the Android Build System

Review the basics of Gradle, the Android Build System. 2016-07-05 00:00 2KB www.developer.com

(0.01/2)  7  Using the Executor Framework to Deal with Java Threads

Examine the Java core framework and its uses with a little background idea to begin with. 2016-07-05 00:00 6KB www.developer.com

 8  Amazon Announces Immediate Availability of Asia Pacific (Mumbai) Region

On June 27th, Amazon announced the immediate availability of their 6th AWS Region in Asia Pacific. This region is in Mumbai, India and it joins other regions in Asia Pacific including Beijing, Seoul, Singapore, Sydney, and Tokyo. With the addition of Mumbai... 2016-07-05 16:02 4KB www.infoq.com

 9  .NET Core 1.0 Released

Microsoft has formally released version 1.0 of .NET Core, the freely available and open source version of .NET. This provides developers a multiplatform way to target Windows, Linux, and Mac OS X systems with a single codebase. 2016-07-05 16:00 2KB www.infoq.com

 10  Evolving Glasgow’s Future City

It has been two years since Glasgow was awarded £24m under the Future City demonstrator programme. We find out how it has evolved. 2016-07-05 13:49 2KB www.computerweekly.com

 11  CW@50: The story of the internet, and how it changed the world

Computer Weekly’s journey through 50 years of innovation in technology continues with a look back at the history of the internet and the huge changes it has brought to society. 2016-07-05 13:49 2KB www.computerweekly.com

 12  European omni-channels: Hype or reality?

Organisations in Europe are adapting to demands for omni-channel services from consumers across the continent 2016-07-05 13:49 2KB www.computerweekly.com

 13  Microsoft Streamlines Visual Studio Installation

Microsoft is refactoring its Visual Studio installation to be smaller, faster, more reliable and easier to manage. 2016-07-05 13:49 5KB www.eweek.com

 14  Eclipse Foundation Ships Neon Release Train

The Eclipse Foundation shipped its eleventh annual release train, featuring 84 projects and 69 million lines of code from nearly 800 developers. 2016-07-05 13:49 4KB www.eweek.com

 15  Twilio IPO May Be Key Indicator for Other Unicorns in 2016

NEWS ANALYSIS: A good response from investors June 23 could help determine whether companies such as Dropbox, Uber and others decide to test the waters this year. 2016-07-05 13:49 4KB www.eweek.com

 16  Google Seeks to Spur Kids' Interest in Coding With Project Bloks

A Google Research project seeks to build on years of theory and research in the area of tangible programming to interest children in programming at an early age. 2016-07-05 13:49 4KB www.eweek.com

 17  Chan Zuckerberg Initiative Selects Andela for First Major Investment

Andela, a company that pairs developers in Africa with opportunity in the U. S., has been selected as the first major investment of the Chan Zuckerberg Initiative. 2016-07-05 13:49 4KB www.eweek.com

 18  Ruby On Rails Reaches 5.0

2016-07-05 13:48 2KB www.i-programmer.info

 19  Enterprises: Tear Down Your Engineering Silos

Will Murrell, a senior network engineer with UNICOM systems, knows a thing or two about silos. UNICOM develops a variety of software and other tools to work with IBM's mainframe, Microsoft Windows, and Linux. Murrell recently talked with InformationWeek senior editor Sara Peters... 2016-07-05 13:48 1KB www.informationweek.com

 20  Codenvy's Language Server Protocol Reduces Programmer Envy

Codenvy, Red Hat and Microsoft collaborate on new language protocol for developers to integrate programming languages across code editors and IDEs. 2016-07-05 13:49 5KB www.eweek.com

 21  IBM Adds New Bluemix OpenWhisk Tools for IoT Development

IBM added new tools for its Bluemix OpenWhisk serverless computing platform that utilizes Docker. OpenWhisk also features user interface updates. 2016-07-05 13:49 3KB www.eweek.com

 22  Eclipse Updates Four IoT Projects, Launches a New One

The Eclipse Foundation announced new releases of four open-source IoT projects to accelerate IoT solution development. 2016-07-05 13:49 3KB www.eweek.com

 23  Tesla Autopilot Crash Under NHTSA Investigation

The National Highway Traffic Safety Administration is looking into the circumstances surrounding a fatal accident involving a Tesla being driven under autopilot. 2016-07-05 12:06 4KB www.informationweek.com

 24  HTC continues bitter struggle; HTC 10 flagship on course to sell only 1m units all year

HTC is continuing to struggle as its smartphone division fails to capture the market. A new report says the company's critically acclaimed flagship will only sell one million units this year. 2016-07-05 11:20 2KB feedproxy.google.com

 25  LG Expands the X Series with the LG X5 and LG X Skin

LG just announced two new smartphones today 2016-07-05 11:12 2KB news.softpedia.com

 26  Microsoft has permanently cut the price of its Surface 3 Docking Station in the UK by 40%

Microsoft slashed 40% off the Docking Station in April in a 'limited time offer' due to end on June 30. But it's now extended that discount by a further six months - effectively a permanent price cut. 2016-07-05 11:06 1KB feedproxy.google.com

 27  Data Generation Gap: Younger IT Workers Believe The Hype

There's a growing generation gap when it comes to the promise of revenues from data-driven projects. Where younger workers see the future, older workers may only see another cycle of tech hype. 2016-07-05 11:06 3KB www.informationweek.com

 28  IBM Opens Blockchain-Oriented, Bluemix Garage In NYC

This week, IBM added a seventh garage for developers. Big Blue is opening a Bluemix Garage in New York City that will focus on financial services, including the use of blockchain technology. 2016-07-05 11:05 4KB www.informationweek.com

 29  Hortonworks Commits To Microsoft's Azure Cloud

Hadoop distributor Hortonworks announced a deeper partnership with cloud giant Microsoft, a new consortium to create an open source genomics project for precision medicine, and new enterprise features in its Hortonworks Data Platform update at this week's Hadoop Summit. 2016-07-05 11:05 4KB www.informationweek.com

 30  Microsoft to introduce more flexible Enterprise Advantage licensing in 2017

Microsoft moves to let customers mix on-premise and cloud tools in their volume licenses. 2016-07-05 18:00 3KB www.computing.co.uk

 31  MIT Develops New "Swarm" Multi-Core CPU Architecture for Higher Speeds

New architecture improves parallel computing 2016-07-05 10:50 2KB news.softpedia.com

 32  More than 2,000 police data breaches in 4.5 years, report reveals

Big Brother Watch has called for new policies to ensure police forces keep personal data safe after FOI requests show an average of 10 data breaches a week 2016-07-05 10:45 2KB www.computerweekly.com

 33  BleachBit 1.12 Free System Cleaner Brings Support for Ubuntu 16.04, Fedora 24

Available now for GNU/Linux and Microsoft Windows systems 2016-07-05 10:41 2KB news.softpedia.com

 34  'Sneak peek' at Xbox avatars with wheelchairs hints at wider avatar upgrades on the way

Microsoft's Mike Ybarra has provided a 'sneak peek' at how Xbox avatars will look with a new wheelchair option - and the images suggest that more detailed avatars may be on the way for everyone. 2016-07-05 10:06 1KB feedproxy.google.com

 35  Uber App Update To Track Driver Behavior

The update is designed to capture data about how Uber's drivers operate their vehicles -- measuring braking, acceleration, and speed. 2016-07-05 10:06 4KB www.informationweek.com

 36  Industrialised cyber crime disrupting business, report reveals

A majority of businesses do not comprehend the methods and motivations of cyber attackers or fully understand the scale of the threat, a BT-KPMG report has revealed 2016-07-05 10:00 2KB www.computerweekly.com

 37  Defender OS Rebased on Fedora 24, Gets Cinnamon 3.0.6 & Linux Kernel 4.6.3

The distro also offers a version based on Mageia 5 Linux. 2016-07-05 09:59 2KB news.softpedia.com

 38  Samsung Expecting Best Quarter in Two Years Thanks to Galaxy S7 Sales

Samsung might report 13% increase in operating profit 2016-07-05 09:46 2KB news.softpedia.com

 39  Debian 8 Gets New Kernel Update, Five Vulnerabilities and a Regression Patched

Users are urged to update their system as soon as possible 2016-07-05 09:29 2KB news.softpedia.com

 40  Monitor your CPU temperature with Core Temp

Core Temp is a powerful CPU temperature monitor which has been helping users watch their hardware since 2006. The project seemed to have faded away in the past few years... 2016-07-05 08:52 1KB feeds.betanews.com

 41  Identity fraud in UK targets under 30s

New figures reveal a 52 percent rise in young identity fraud victims in the UK. In 2015, just under 24,000 people aged 30 and under were victims of identity fraud. This is up from 15,766 in 2014, and more than double the 11,000... 2016-07-05 08:51 2KB feeds.betanews.com

 42  Samsung Galaxy J2 (2016) with Smart Glow Notification Ring Leaks in Image

The notification support is placed around the rear camera 2016-07-05 08:31 2KB news.softpedia.com

 43  Nokia ends the smartphone beta test

Over four years ago, Nokia claimed that the smartphone beta test was over when it launched the Lumia 900. Let's find out if the company delivered, or if it was all hype for Windows Phone 7.5. 2016-07-05 08:30 7KB feedproxy.google.com

 44  Video conferencing increases productivity

Video collaboration increases productivity and improves both business and personal relationships, according to video conferencing technology company Lifesize. The company polled its users and says … 2016-07-05 08:21 2KB feeds.betanews.com

 45  The Xiaomi Mi Band 2 is the most disappointing wearable of the year so far

Xiaomi has recently released the latest version of their Mi Band fitness tracker range, and while it has a number of new features, it also disappoints in many areas. Check out our full review! 2016-07-05 08:14 7KB feedproxy.google.com

 46  The New IT: Driving Business Innovation With Tech

Andi Mann, chief technology advocate at Splunk, sees major changes afoot in how IT and business are aligning. Here, he shares experiences with CIOs and other IT leaders as they work on developing strategies to derive real business outcomes from the technology they use every day. 2016-07-05 08:06 6KB www.informationweek.com

 47  Research: Exploring the connection between DevOps and digital

Is adopting DevOps a prerequisite for going digital, or a by-product of the digitalisation process? 2016-07-05 18:00 817Bytes www.computing.co.uk

 48  Agile Vs. DevOps: 10 Ways They're Different

DevOps and Agile are broad terms but they aren't synonyms. Here are the ways in which they're different -- and why those differences matter to your team. 2016-07-05 07:06 2KB www.informationweek.com

 49  10 Hot Smartphones To Consider Now

Although smartphone sales have been on the decline recently, there is no shortage of options. Here are 10 hot models worth a look. 2016-07-05 07:06 3KB www.informationweek.com

 50  Chromecast functionality arrives in Chrome 51

With Chrome 51, Google has built Casting functionality directly into the browser. Those with the Cast extension can keep it as a quick way to cast web pages to their Chromecast devices too. 2016-07-05 06:38 2KB feedproxy.google.com

 51  Intel code leaves systems vulnerable to attacks; flaw used to bypass all Windows security

Old Intel code still present in UEFI firmware used by many machines has left devices open to attacks. Lenovo admits that the Secure Boot-disabling vulnerability's scope of impact is industry wide. 2016-07-05 06:22 2KB feedproxy.google.com

 52  Twitter estimates that it has 10 million users in China

A source inside Twitter told TechCrunch that the company estimates that it has around 10 million users in China. That gives us a glimpse at the potential.. 2016-07-05 00:00 4KB feedproxy.google.com

 53  The user guide to early stage fundraising

There are now myriad financing options that founders can consider as they look to build their companies. Nearly 70,000 companies received funding through.. 2016-07-05 00:00 5KB feedproxy.google.com

 54  Silent Circle silently snuffs out its warrant canary — but claims it’s a “business decision”

Silent Circle, the maker of encrypted messaging apps and a security hardened Android smartphone, called Blackphone, has discontinued its warrant canary. 2016-07-05 00:00 6KB feedproxy.google.com

 55  Apple urges organ donation via new iPhone software

SAN FRANCISCO— Apple wants to encourage millions of iPhone owners to register as organ donors through a software update that will add an easy sign-up button to the health information app that comes installed on every smartphone the company makes. CEO Tim Cook says he hopes the new... 2016-07-05 03:02 1KB www.cnbc.com

 56  Enterprise NPM users to get help with security, licensing

Third parties are being enlisted to provide add-on services for JavaScript modules 2016-07-05 03:00 2KB www.infoworld.com

 57  Mogees Play turns any surface into a music and gaming device

The Mogees Play is the latest product from London-based startup Mogees. Based on the same contact microphone and machine-learning technology first seen in the.. 2016-07-05 00:00 4KB feedproxy.google.com

 58  Finally, a service that tests and ranks the best VPNs for China

Circumvention Central is a resource that tests the speed and reliability of VPNs on actual websites not just servers, and on an ongoing basis. The result is.. 2016-07-05 00:00 4KB feedproxy.google.com

 59  Wi-Fi sharing community Instabridge picks up backing from Draper Associates

Swedish startup Instabridge, a Wi-Fi sharing community and mobile app, has picked up $1 million in new funding. Noteworthy for a European startup is that.. 2016-07-05 00:00 2KB feedproxy.google.com

 60  Four Things Your Business Does That Seem Outdated to Programmers

Attracting, hiring, and keeping good employees will be easier if you follow these practices. 2016-07-05 00:00 5KB www.developer.com

 61  A Deeper Look: Java Thread Example

Become more familiar with some concepts that would aid in better understanding Java threads, eventually leading to better programming. 2016-07-05 00:00 9KB www.developer.com

 62  Top 10 Reasons to Get Started with React.JS

Study some reasons why you should choose the React.JS framework for your next project. 2016-07-05 00:00 7KB www.developer.com

 63  Stream Operations Supported by the Java Streams API

Take on the concept of streams from a comparative perspective; we'll illustrate some of its usage in regular Java programming. 2016-07-05 00:00 6KB www.developer.com

 64  Exploring the Java String Tokenizer

Gain a comprehensive understanding of the background concepts of tokenization and its implementation in Java. 2016-07-05 00:00 5KB www.developer.com

 65  Streamline Your Understanding of the Java I/O Stream

Learn to streamline your understanding of I/O streams APIs in Java. 2016-07-05 00:00 9KB www.developer.com

 66  Testing Controllers in Laravel with the Service Container

Learn how the Laravel controller and Service Container work together and how to leverage the container for testing purposes. 2016-07-05 00:00 6KB www.developer.com

 67  The Top Ten Ways to Be a Great ScrumMaster

Do you lead an Agile team? Here are tips to be more productive. 2016-07-05 00:00 3KB www.developer.com

 68  Serverless Architectures on AWS: Monitoring Costs

Monitoring your costs is always a big concern. Become better equipped to do so. 2016-07-05 00:00 7KB www.developer.com

 69  Tips for MongoDB WiredTiger Performance Tuning

Learn about some of the parameters you can tune to optimize the performance of WiredTiger on your server. 2016-07-05 00:00 4KB www.developer.com

 70  The Value of Doing APIs Right: A Look at the SiriKit API Demoware

Siri is getting its own API, and this might open up new vistas for its use. 2016-07-05 00:00 6KB www.developer.com

 71  What Is Jenkins?

Leap into Jenkins, an open source project written in Java and dedicated to sustaining continuous integration practices. 2016-07-05 00:00 15KB www.developer.com

 72  Cross-field Validation in JSF

Study a brief overview of three approaches for achieving cross-field validation using JSF core and external libraries. 2016-07-05 00:00 7KB www.developer.com

 73  15 Amazing Mobile Apps for Aspiring Designers

Harness the power of technology to create new apps that will captivate your users. 2016-07-05 00:00 10KB www.developer.com

 74  Elastic Leadership: Review the Code

Team leaders should influence the team in the right direction by changing environmental forces. But getting the team leaders to do this pushing might lead to environmental forces in the first place. 2016-07-05 00:00 6KB www.developer.com

 75  John Lewis CIO Paul Coby promoted to uber-CIO of John Lewis Partnership

Paul Coby to oversee IT of both John Lewis and Waitrose. 2016-07-05 18:00 2KB www.computing.co.uk

Articles

75 articles, 2016-07-05 18:00

 

 1 

Google's DeepMind AI will identify ocular ailments at Moorfields Eye Hospital (1.02/2)

Google’s DeepMind artificial intelligence (AI) division has partnered with Moorfields Eye Hospital in London to create a machine learning system that can identify conditions that may threaten eyesight. 
Moorfields will use the system to run algorithms on top of one million anonymous digital eye scans as a way to spot the early signs of conditions that may be missed by medical experts.
The DeepMind technology could also spot hidden problems that could result in harm to a patient’s sight during long waiting times for specialist treatment.
“This is where DeepMind is able to help us understand these huge datasets and then put it together so that it benefits towards making a good diagnosis and achieving the best possible treatment for our patients,” said Professor Sir Peng Tee Khaw, head of Moorfields’ ophthalmology research centre.
The data being accessed by DeepMind will be kept anonymous to preserve patient privacy but still allow the machine learning technology to carry out work that will benefit the hospital.
The partnership with Moorfields is not DeepMind’s first foray into the medical sector. Google’s AI arm is also involved in the analysis of 1.6 million patient records held by the Royal Free NHS Trust, despite some controversy over the company's access to such sensitive data.
DeepMind previously gained acclaim after being used to create AlphaGo, an AI system that can beat champion human players of the highly complex board game Go, long described as a game that machine learning systems could not master.
Yet as AI technology gathers pace and becomes more advanced and easier to deploy thanks to the cloud, it is appearing in a diverse range of use cases, from unmanned coffee shops in London to 3D-printed driverless cars that can hold conversations with passengers.

 

 2 

China to restrict online news sites from using social media as source (1.02/2)

China has announced a crackdown on news websites that use social media as a source for reports without properly verifying and fact-checking them first, according to a report by state media agency Xinhua.
This recent move by the Cyberspace Administration of China is reportedly an effort to help combat the spread of false news. The announcement prohibits online media from reporting news that uses social media as a source without proper approval.
"It is forbidden to use hearsay to create news or use conjecture and imagination to distort the facts," according to Xinhua, translated by the South China Morning Post. It further stated:
Moreover, the report stated that a number of websites, such as Sina.com, Ifeng.com, Caijing.com.cn, Qq.com and 163.com, have been fabricating news and have subsequently been punished. However, it was not clarified what sanctions were imposed on the websites in question.
China has taken a significant number of steps in its stand against online services and social media. Back in 2015, China cracked down on the use of VPNs, which many people use to access blocked websites like Facebook and Twitter. Shortly after, the country required everyone to use their real names online.
Source: South China Morning Post via CNet | Image via Wikimedia

 

 3 

Advanced Concepts of Java Object Serialization (0.02/2)

Serialization literally refers to arranging something in a sequence. In Java, it is the process by which the state of an object is transformed into a stream of bits. The transformation maintains a sequence in accordance with the metadata supplied, such as the class definition of a POJO. Perhaps it is this transformation from an abstraction to a raw sequence of bits that gives the process its name. This article takes up serialization and its related concepts and tries to delineate some of its nooks and crannies, along with their implementation in the Java API.
Serialization makes any POJO persistable by converting it into a byte stream. The byte stream then can be stored in a file, memory, or a database.
Figure 1: Converting to a byte stream
Therefore, the key idea behind serialization is the concept of a byte stream. A byte stream in Java is an atomic collection of 0s and 1s in a predefined sequence. Atomic means that they are not further derivable. Raw bits are quite flexible and can be transmuted into anything: a character, a number, a Java object, and so forth. Bits individually do not mean anything unless they are produced and consumed by the definition of some meaningful abstraction. In serialization, this meaning is derived from a predefined data structure called a class, which is instantiated into an active entity called a Java object. The raw bit stream is then stored in a repository such as a file in the file system, an array of bytes in memory, or a database. At a later time, the bit stream can be restored back into its original Java object by a reverse procedure. This reverse process is called deserialization.
Figure 2: Serialization
The object serialization and deserialization processes are designed to work recursively. That means that when an object at the top of an inheritance hierarchy is serialized, the inherited objects get serialized as well. Reference objects are located recursively and serialized. During restoration, a reverse process is applied and the object is deserialized in a bottom-up fashion.
An object to be serialized must implement the java.io.Serializable interface. This interface contains no members and is used to designate a class as serializable. As mentioned earlier, all inherited subclasses are also serialized by default. All the member variables of the designated class are persisted except those declared transient or static; they are not persisted. In the following example, class A implements Serializable. Class B inherits class A; as a result, B is also serializable. Class B contains a reference to class C. Class C also must implement the Serializable interface; otherwise, java.io.NotSerializableException will be thrown at runtime.
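The accompanying code listing did not survive formatting; a minimal sketch of the arrangement just described, with illustrative field values and a round trip through a file, might look like this:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A is serializable, so its subclass B is serializable by default.
class A implements Serializable {
    int baseValue = 42;          // persisted
    transient int scratchPad;    // transient: not persisted
    static int instanceCount;    // static: belongs to the class, not persisted
}

// B inherits A's serializability and holds a reference to C.
class B extends A {
    C ref = new C();  // C must also implement Serializable; otherwise,
                      // writing a B throws java.io.NotSerializableException
}

class C implements Serializable {
    String label = "referenced object";
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        // Serialize: B, its superclass state, and the referenced C
        // are all written out recursively.
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream("b.ser"))) {
            out.writeObject(new B());
        }
        // Deserialize: the object graph is restored bottom-up.
        try (ObjectInputStream in =
                new ObjectInputStream(new FileInputStream("b.ser"))) {
            B restored = (B) in.readObject();
            System.out.println(restored.baseValue + " " + restored.ref.label);
        }
    }
}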
If you want a single object to be written to or read from a stream without being shared, use the writeUnshared and readUnshared methods instead of writeObject and readObject, respectively.
Observe that any changes in static and transient variables are not stored by the process. There are a number of problems with the serialization process. As we have seen, if a superclass is declared serializable, all its subclasses get serialized, too. This means that if A inherits B, which inherits C, which inherits D, and so on, all the objects would be serialized! One way to make fields of these classes non-serializable is to use the transient modifier. But what if we have, say, 50 fields that we do not want to persist? We would have to declare all 50 fields transient! Similar problems can arise in the deserialization process. What if we want to restore only five fields rather than all 10 fields serialized and stored previously?
There is a specific way to stop serialization in the case of inherited classes: write your own readObject and writeObject methods, as follows.
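The listing itself was lost in formatting; a sketch of the standard idiom, assuming (as the text implies) that serialization is blocked by throwing NotSerializableException from private readObject and writeObject methods:

import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Parent implements Serializable {
    // Recommended: a unique ID identifying the persisted data. If omitted,
    // the JVM computes one by an internal (and time-consuming) logic.
    private static final long serialVersionUID = 1L;
    int keepMe;
}

// Child inherits Parent's serializability but opts out of it.
class Child extends Parent {
    private void writeObject(ObjectOutputStream out) throws IOException {
        throw new NotSerializableException("Child must not be serialized");
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        throw new NotSerializableException("Child must not be deserialized");
    }
}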
It is recommended that a serializable class declare a unique variable, called serialVersionUID, to identify the data it persists. If this optional variable is not supplied, the JVM creates one by an internal logic, which is time consuming.
Compile the code to create the class file. The output would be like what's shown in Figure 3.
Figure 3: Results of the compiled class file
In a nutshell, the Serializable interface needs some changes to provide better control over the serialization and deserialization process.
The Externalizable interface provides some improvement. But bear in mind that the automatic implementation of the serialization process with the Serializable interface is fine in most cases; Externalizable is a complementary interface that allays many of its problems where better control over serialization and deserialization is sought.
The process of serialization and deserialization is pretty straightforward, and most of the intricacies of storing and restoring an object are handled automatically. Sometimes, it may happen that the programmer needs some control over the persistence process; say, the object to be stored needs to be compressed or encrypted before storing and, similarly, decompressed and decrypted during restoration. This is where you need to implement the Externalizable interface. The Externalizable interface extends the Serializable interface and provides two member functions for implementing classes to override.
The readExternal method reads the byte stream from an ObjectInput, and the writeExternal method writes the byte stream to an ObjectOutput. ObjectInput and ObjectOutput are interfaces that extend the DataInput and DataOutput interfaces, respectively. Their polymorphic read and write methods are called to serialize an object.
Externalization makes the serialization and deserialization processes much more flexible and gives you better control. But there are a few points to remember when using the Externalizable interface: the class must have a public no-argument constructor, and the class itself is responsible, through readExternal and writeExternal, for deciding which fields are persisted.
According to the preceding properties, any non-static inner class is not externalizable. The reason is that the JVM modifies the constructor of inner classes by adding a reference to the parent class at compilation time. As a result, the idea of having a no-argument constructor is simply inapplicable to non-static inner classes. And because we can control which fields to persist with the help of the readExternal and writeExternal methods, making a field non-persistable with the transient modifier is also irrelevant.
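As an illustration, here is a minimal, hypothetical sketch of the pattern (the Account class and its fields are invented for this example):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Account implements Externalizable {
    private String owner;
    private int balance;
    private transient int cached;  // irrelevant here: we already control
                                   // which fields get written below

    // Externalizable demands a public no-argument constructor: the JVM
    // calls it first, then readExternal() restores the state.
    public Account() { }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(owner);    // only the fields we choose are persisted;
        out.writeInt(balance);  // compression or encryption could happen here
    }

    @Override
    public void readExternal(ObjectInput in)
            throws IOException, ClassNotFoundException {
        owner = in.readUTF();   // must read in the same order as written
        balance = in.readInt();
    }
}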
Serializable and Externalizable are tagging interfaces that designate a class for persistence. Instances of these classes may be transformed and stored in byte stream storage. The storage may be a file on disk or a database, or the stream may even be transmitted across a network. The serialization process and Java I/O streams are inseparable; they work together to bring out the essence of object persistence.

 

 4 

Security Think Tank: Biometrics have key role in multi-factor security (0.01/2)

In offering an additional method of authentication, biometrics provide an extra factor of security. This represents a significant opportunity for organisations to reduce their reliance on traditional passwords and their inherent flaws, not least of which is that users write them down.
However, although biometrics accordingly offer an attractive proposition, there are limitations.
First, biometrics may not be secret. For example, fingerprint authentication is the most popular biometric method, yet people’s fingerprints are everywhere. 
Second, biometric data is personally sensitive, and the handling of this data represents a significant risk in itself.
When looking at the privacy of biometric data, it is important to understand how it tends to be used. A scan will take specific data points and record them in a format appropriate to that supplier. The data should then be encrypted so that, even if it is subsequently compromised and decrypted, it is likely to be of limited use.
More dangerous is when more identification information than necessary is taken – full fingerprints, full iris scans, complete voice analysis, etc. If this information is compromised, then a much larger data set may be leaked, which could be used to defeat other authentication schemes reliant on that particular biometric attribute.

 

 5 

Windows 10 Anniversary Update Slated For Aug. 2 (0.01/2)

Mark your calendars, Windows users. Microsoft has confirmed the Windows 10 Anniversary Update is slated for a public rollout on Aug. 2.
Windows 10 was officially released on July 29, 2015. Almost one year later, Microsoft reports over 350 million devices running Windows 10 -- an increase of 50 million since the last device count in May 2016.
Customer engagement is also high, with users spending more than 135 billion hours on the OS.
To celebrate the one-year mark, Redmond is releasing one of the biggest updates to arrive on Windows 10 since its public rollout. The Anniversary Update includes new features for businesses and consumers.
[More on Windows 10: Microsoft paid out $10,000 for a forced OS upgrade.]
Security is a priority in the coming update. Two major security features arriving on Aug. 2 are improvements to Windows Defender and Windows Hello support for apps and websites, part of Microsoft's effort to eliminate the password.
Biometric authentication system Windows Hello can be used to log in to apps and websites within Microsoft Edge. As part of the Anniversary Update, Windows users can also use Windows Hello to unlock their PCs using companion devices.
For individual users, improvements to Windows Defender will include an option to automatically schedule quick, regular PC scans and receive alerts and summaries if threats are detected.
Enterprise customers will receive Windows Defender Advanced Threat Protection , which is designed to detect, investigate, and respond to advanced threats. Businesses will be protected from accidental data leaks with Windows Information Protection, which lets corporations separate personal and business information to better protect sensitive data.
Cortana, which first arrived on the desktop in Windows 10, will now be available above the lock screen, so you can set reminders or play music without unlocking your PC. Cortana will also save and recall important information, like frequent flier numbers, and give notifications across all devices where it is present.
Windows Hello isn't the only improvement arriving in Microsoft Edge. The browser will come with power-saving upgrades like using less memory and fewer CPU cycles, and lessening the effects of background activity. Microsoft has already touted the lasting power of Edge, and this indicates it's doing more to preserve users' battery life.
Edge will also be updated with extensions including the Pinterest "Pin It" Button, Amazon Assistant, LastPass, AdBlock, and AdBlock Plus in the Windows Store. It'll also have a new accessibility architecture to support modern web standards like HTML5, CSS3, and ARIA.
Microsoft is also working to improve digital pen capabilities with Windows Ink, a central hub for using the pen in Windows 10. You'll be able to use Windows Ink to take notes, draw, or sketch on screenshots. Smart Sticky Notes help you remember tasks and suggest directions.
Some of the core apps in Windows 10 have been updated to include features to support inking. You can handwrite notes in Office or Edge, or draw custom routes on the Maps app.
Windows 10 has been available as a free upgrade since it launched last summer, but the clock is ticking for anyone still running older versions of Windows.
The Anniversary Update will be released a few days after Microsoft stops offering Windows 10 for free to current users of Windows 7, Windows 8, and Windows 8.1. If you want the upcoming features at no cost, be sure to upgrade to Windows 10 before July 29.

 

 6 

Understanding Gradle, the Android Build System (0.01/2)

There are various world-class build solutions available to developers. Ant and Maven come to mind for Java developers but, as any Android developer would know, the de facto build system for Android development with Android Studio is Gradle.
Gradle is an easily customizable build system that supports building by a convention model. Gradle is written in Java, but the build language is a Groovy DSL (domain-specific language). Gradle not only supports multi-project builds, it also supports dependency resolution from Ivy and Maven repositories. Gradle can also build non-Java projects.
You have a few ways to get Gradle: download it from gradle.org, use the Gradle wrapper checked into a project, or rely on the copy bundled with Android Studio.
Gradle has a build file, build.gradle. The build file contains tasks, plugins, and dependencies.
A task is code that Gradle executes. Each task has a lifecycle and properties. Tasks have 'actions': the code that is going to execute. Task actions are broken into two parts: a 'first action' and a 'last action.'
Tasks also have dependencies: one task can depend on another. This allows specifying the order in which tasks are executed.
Gradle has the concept of build phases. There is an initialization phase that is used to configure multi-project builds. The configuration phase involves executing code in the task that is not the action. The execution phase involves actually executing the task.
To declare a simple dependency, you can use Task.dependsOn. Sometimes, you might need a more explicit declaration, such as mustRunAfter, where a task can run only after another task has run. Likewise, there is also support for shouldRunAfter, where the execution order is preferred but not forced. finalizedBy is also supported, as sketched below.
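A small, hypothetical build.gradle fragment (task names are invented for illustration) showing these ordering declarations:

task compile {
    doLast { println 'compiling' }
}

task unitTest {
    dependsOn compile               // simple dependency: compile runs first
    doLast { println 'running unit tests' }
}

task integrationTest {
    dependsOn compile
    mustRunAfter unitTest           // enforced ordering when both are scheduled
    doLast { println 'running integration tests' }
}

task report {
    shouldRunAfter integrationTest  // preferred ordering, not forced
    doLast { println 'building report' }
}

task cleanup {
    doLast { println 'cleaning up' }
}
integrationTest.finalizedBy cleanup // cleanup runs whenever integrationTest runs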
Here is a sample Gradle build file for Android, showing jcenter used as the repository and a dependency on version 1.5 of the Android Gradle plugin.
// build.gradle (project)
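// (The original listing was lost; what follows is a representative sketch.
// Plugin version 1.5.0 matches the "Gradle 1.5" dependency named above.)
buildscript {
    repositories {
        jcenter()   // repository for the build system's own dependencies
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.5.0'
    }
}

allprojects {
    repositories {
        jcenter()   // repository for module dependencies
    }
}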
The module-level Gradle build file supports configuring build settings, like compileSdkVersion, buildToolsVersion, the default configuration, build types, and dependencies.
//build.gradle module
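// (Again a representative sketch; the SDK levels and the application ID are
// illustrative, not taken from the original article.)
apply plugin: 'com.android.application'

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.3"

    defaultConfig {
        applicationId "com.example.app"
        minSdkVersion 16
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"
    }

    buildTypes {
        release {
            minifyEnabled false
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
}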
In this article, you learned about Gradle, a popular build system that also is used for Android development. I hope you have found this information useful.
Vipul Patel is a technology geek based in Seattle. He can be reached at vipul.patel@hotmail.com. You can visit his LinkedIn profile at https://www.linkedin.com/pub/vipul-patel/6/675/508 .

 

 7 

Using the Executor Framework to Deal with Java Threads (0.01/2)

Threads provide a multitasking ability to a process (process = program in execution). A program can have multiple threads, each providing a unit of control as one of its strands. Single-threaded programs execute in a monotonous, predictable manner. A multi-threaded program, by contrast, brings out the essence of concurrency, or the simultaneous execution of program instructions, where a subset of the code executes, or is supposed to execute, in parallel. This mechanism boosts performance, especially because modern processing workhorses are multi-core. Running a single-threaded process that can utilize only one CPU core is simply a waste of resources.
Java's core API includes the Executor Framework, which provides some relief to the programmer working in a multi-threaded arena. This article focuses on the framework and its uses, with a little background to begin with.
Parallel execution requires some hardware assistance, and a threaded program that brings out the essence of parallel processing is no exception. Multi-threaded programs can best utilize the multiple CPU cores found in modern machines, resulting in a manifold performance boost. The problem is that maximal utilization of multiple cores requires a program's code to be written with parallel logic from the ground up. Practically, this is easier said than done. In dealing with simultaneous operations where everything is seemingly multiple, the problems and challenges are also multi-faceted. Some logic is inherently parallel, whereas some is very linear. The biggest problem is to balance the two while keeping up maximal utilization of processing resources. Inherently parallel logic is pretty straightforward to implement, but converting semi-linear logic into optimal parallel code can be a daunting task. For example, the solution of 2 + 2 = 4 is quite linear, but the logic to solve an expression such as (2 x 4) + (5 / 2) can be leveraged with a parallel implementation, because the two sub-expressions can be evaluated independently.
Parallel computing and concurrency, though closely related, are distinct. This article uses both words to mean the same thing, to keep it simple.
Refer to https://en.wikipedia.org/wiki/Parallel_computing to get a more elaborate idea on this.
There are many aspects to be considered before modeling a program for multi-threaded implementation. Some basic questions to ask while modeling one are:
When creating a task (task = individual unit of work), what we normally do is either implement the Runnable interface or extend the Thread class, create the task object, and then execute each task on an explicitly created thread, as in the sketch below.
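The original listings did not survive formatting; a minimal, self-contained sketch of those steps (the task body is illustrative) might be:

public class ThreadDemo {

    // A task: an individual unit of work, expressed as a Runnable.
    static class MyTask implements Runnable {
        @Override
        public void run() {
            System.out.println("Running in "
                    + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        MyTask task = new MyTask();   // create the task
        new Thread(task).start();     // execute it on a newly created thread
    }
}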
To get feedback from an individual task, we have to write additional code. But the point is that there are too many intricacies involved in managing thread execution. The creation and destruction of a thread, for example, has a direct bearing on the overall time required to start another task; if it is not performed gracefully, unnecessary delay in the start of a task is certain. A thread consumes resources, so multiple threads consume multiple resources. This has a propensity to drag down overall CPU performance; worse, it can crash the system if the number of threads exceeds the limit permitted by the underlying platform. It also may happen that some threads consume most of the resources, leaving other threads starved, or that a typical race condition arises. So, the complexity involved in managing thread execution is easily intelligible.
The Executor Framework attempts to address this problem and bring in some controlling attributes. Its predominant aspect is a clear demarcation between task submission and task execution. The executor says: create your task and submit it to me; I'll take care of the rest (the execution details). The mechanics of this demarcation are attributed to the Executor interface in the java.util.concurrent package. Rather than creating a thread explicitly, the code above can be rewritten to obtain an executor and then submit the task to it, as sketched below.
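Both fragments were lost to formatting; a combined, self-contained sketch under the same assumptions as above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorHandoff {
    public static void main(String[] args) {
        // The same kind of task as before, now merely submitted.
        Runnable task = () -> System.out.println("Running in "
                + Thread.currentThread().getName());

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(task);  // submission only; execution details are
                                 // the executor's responsibility
        executor.shutdown();     // let the worker thread wind down
    }
}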
Calling the execute method does not ensure that thread execution is initiated; it merely represents the submission of a task. The executor takes up the responsibility on the program's behalf, including the details of the policies to adhere to in the course of execution. The class library supplied by the Executor Framework determines the policy, which, however, is configurable.
There are many static factory methods available in the Executors class. (Note that Executor is an interface and Executors is a class; both are included in the java.util.concurrent package.) A few of the commonly used ones are newSingleThreadExecutor, which creates an executor that uses a single worker thread; newFixedThreadPool, which creates a pool of a fixed number of reusable threads; newCachedThreadPool, which creates new threads as needed and reuses previously constructed idle threads; and newScheduledThreadPool, which supports delayed and periodic task execution.
All of these methods return an ExecutorService object.
The ExecutorService interface extends Executor and provides the methods necessary to manage the execution of threads, such as the shutdown() method to initiate an orderly shutdown of threads. There is another interface, called ScheduledExecutorService, which extends ExecutorService to support the scheduling of threads.
Refer to the Java documentation for more details on these methods and other service details. Note that the use of an executor is highly customizable, and one can be written from scratch.
Let's create a very simple program to understand the use of an executor.
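The program itself was lost; a minimal stand-in, assuming a fixed pool of three worker threads:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        // A pool of 3 reusable worker threads; excess tasks wait in a queue.
        ExecutorService pool = Executors.newFixedThreadPool(3);

        for (int i = 1; i <= 5; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("Task " + id
                    + " executed by " + Thread.currentThread().getName()));
        }

        pool.shutdown();                            // no new tasks accepted
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for completion
    }
}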
The Executor Framework is one of many aids provided by the Java core APIs, especially for dealing with concurrent execution and creating multi-threaded applications. Other libraries useful for concurrent programming include explicit locks, synchronizers, atomic variables, and the fork/join framework. The separation of task submission from task execution is the greatest advantage of this framework; developers can leverage it to reduce many of the complexities involved in executing multiple threads.

 

 8 

Amazon Announces Immediate Availability of Asia Pacific (Mumbai) Region

On June 27th, Amazon announced the immediate availability of their 6th AWS Region in Asia Pacific. This region is in Mumbai, India, and it joins other regions in Asia Pacific including Beijing, Seoul, Singapore, Sydney, and Tokyo. With the addition of Mumbai, Amazon is now up to 35 Availability Zones across 13 geographic regions worldwide.
At launch, the Mumbai region will host a variety of AWS services including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS). Missing from the initial list of services available are Amazon IoT, Amazon EC2 Container Service (ECS) and Amazon Kinesis Firehose. A complete list of services provisioned can be found on the Asia Pacific Amazon Global Infrastructure page.
Amazon has 75,000 active customers in India, including Ola Cabs and NDTV, which have participated in early previews of the Mumbai AWS region.
Ola Cabs provides a mobile application that allows customers to arrange for car service in India. Ankit Bhati, CTO and co-founder of Ola Cabs, describes how AWS has contributed to the success of their mobile offering:
Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them.
AWS has also allowed Ola Cabs to increase the velocity of developing their microservices platform. By using AWS, Ola Cabs deploys changes more than 30 times per day across 100s of low latency APIs that service millions of requests per day.
NDTV, a media company founded in 1988, provides TV, video and mobile content to consumers around the world. NDTV’s relationship with Amazon goes back to 2009, when it started to deliver video and web content using AWS. NDTV has taken advantage of Amazon’s ability to scale AWS services. This scalability was required during the 2014 Indian election, when NDTV received a 26-fold increase in traffic, resulting in 13 billion hits on Election Day. NDTV welcomes Amazon’s increased presence in India. Kawaljit Singh Bedi, CTO of NDTV Convergence, further explains:
Our web and mobile traffic has jumped by over 30% in the last year and as we expand to new territories like eCommerce and platform-integration we are very excited on the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.
In a recent blog post, Werner Vogels, CTO at Amazon, describes some of the factors driving Amazon’s additional investment in India:
A region in India has been highly sought after by companies around the world who want to participate in one of the most significant economic opportunities in the world – India, a rising economy that holds tremendous promise for growth, a thriving technology hub with a rich eco-system of technology talent, and more. We believe in the Indian market and are investing for the long term. With the Mumbai Region, we look to better serve end-users in India. We believe that with the launch of the Mumbai Region, AWS will enable many more enterprise customers and startups in India to not just reduce the cost of their IT operations, but embark on transformational innovations rapidly in critical new areas such as big data analysis, Internet of Things, and more.
Amazon is not the only public cloud provider aggressively pursuing the Indian market. Microsoft has three Azure regions in India: Central India (Pune), South India (Chennai) and West India (Mumbai).

 

 9 

. NET Core 1.0 Released

The release of .NET Core 1.0 has been announced by Microsoft’s Rich Lander. This brings to fruition the first steps of Microsoft’s plans to make .NET and its supporting technologies freely available under an open source license.
In this release, developers have improved .NET Core’s offline support so that core libraries are cached locally. If your application targeting .NET Core does not use the Internet, it will not need network access to run.
The .NET Core 1.0 release notes are available on GitHub. Along with the release of .NET Core, there will be a new online web portal created by Microsoft to provide documentation and tutorials on how to use .NET. Those with Visual Studio 2015 Update 3 can install .NET Core Tools for Visual Studio to use VS2015 in conjunction with .NET Core. It should be noted that while .NET Core itself is considered version 1.0, the SDK is still labeled Preview 2. According to Lander, this is because the tools in the SDK are still being worked on; however, it is not an indictment of their quality. Note that telemetry is enabled for the SDK only, although it is configurable.
.NET Core currently supports console apps; additional software is required to support other application types, such as ASP.NET Core for web apps.
Developers who want to use F# with the latest tools should note that there is currently an issue preventing the use of F# with the SDK Preview 2. Work is actively being done (based on the GitHub comments) but the issue remains unresolved as of this writing.
Beyond VS2015, all developers interested in .NET Core can download the SDK appropriate for their system. This release of .NET Core supports 8 Linux distributions, Mac OS X 10.11, and Windows Nano Server TP5, in addition to Windows 7 (or newer) and Server 2012 R2 (or newer).

 

 10 

Evolving Glasgow’s Future City

Two years ago, Glasgow city council beat London, Peterborough and Bristol with its Future City demonstrator proposal, showcasing smart city projects, and received £24m from the Technology Strategy Board.
It used the funding to demonstrate technology-led initiatives to integrate transport, communications and other infrastructure to improve the local economy, increase quality of life and reduce impact on the environment.
Gary Walker, programme director for Future City Glasgow, says the programme had four key themes: active travel, aimed at encouraging people to walk and cycle; transport; public safety; and energy.
According to the Future City of Glasgow feasibility study, which looked at a system to manage Glasgow, the city has the lowest life expectancy in the UK. Female life expectancy at birth (78 years) in Glasgow is greater than male life expectancy (71.6 years), but both were much lower than the UK national averages for females (82.3 years) and males (78.2 years) in 2010.
Among the pilot schemes the city has conducted through the Future City demonstrator are a number of initiatives to encourage people to cycle or walk.
Walker says of the active travel demonstrator: “We have a relatively low percentage of people who cycle. We are not as flat as Amsterdam or Copenhagen, but we wanted to allow commuters to map their journeys and share them and rate them.”
Through the demonstrator project, Glasgow created a number of mobile phone apps for crowdsourcing this information.
Walker says the city could also use the data collected to inform future infrastructure decisions. “We don’t have an infinite amount of money to create cycle lanes,” he says. “But by analysing popular cycle routes, city planners could ascertain where to build new ones.”
The council also created a walking app for places of interest and parks, which uses GPS on smartphones to get people walking. “The communities picked up on this and we worked to enable them to add their local knowledge,” Walker adds.
He says the app was highly configurable, enabling people to localise it by bolting on additional functionality. “One of the communities enhanced the original walking app by adding historical pictures and stories,” he says.

 

 11 

CW@50: The story of the internet, and how it changed the world

Previously, we have explored a century and a half of British innovation in networking, and learned how one company – going by various names before eventually settling on BT – sat at the core of the first telegraph networks that connected Britain to the world, just as it sits at the core of the modern fibre network that accomplishes the exact same task.
But what we have not yet examined is the story behind how that network is used, as the basis for an invention that in human history is probably comparable to agriculture, the wheel or writing: the internet.
In the popular imagination, the internet ‘began’ in 1991, and CERN scientist Tim Berners-Lee takes the credit. This could not really be much further from the truth; Berners-Lee invented the World Wide Web, which is actually the space on the internet where documents formatted in hypertext mark-up language (HTML), known more popularly as web pages or sites, reside and are accessed. This is very important, and without it, modern life as we know it would be unimaginable – but it is not really the internet.
The roots of the internet actually go back to a few years before Computer Weekly, and one of the foundations of the internet lies in the UK, at the National Physical Laboratory (NPL) in Teddington, south-west London, where scientist Donald Davies independently hit on one of the core concepts establishing the internet in the early 1960s.
Davies – who back in the 1940s was said to have found a number of errors in Alan Turing’s work, much to Turing’s irritation – based his work on the idea that computer network traffic was chatty, marked by long silences followed by sudden bursts of data, as opposed to the always-on nature of telephone traffic.
It was Davies who coined the term packet switching for the concept of dividing this data into little packets that could be sent independently, and not even necessarily over the same path, to their destination. His work at the NPL, along with that of other pioneering computer scientists such as Len Kleinrock and Paul Baran, fed directly into the creation of the US military-owned Arpanet at the Advanced Research Projects Agency (Arpa).

 

 12 

European omni-channels: Hype or reality?

Opinions about the speed at which customer relationship management (CRM) is evolving into an “omni-channel experience” vary hugely across Europe, depending on whom you ask.
The prevailing view is that competitive imperatives mean there is a growing need to serve customers through several channels using methods that extend beyond traditional CRM tools. This includes the ability to gather sales data from multiple channels in real time to provide a more holistic view of the customer.
Many industry players note that the move towards omni-channel is driven not by suppliers or IT providers, but by the customers that use these products. But is the concept of an omni-channel customer experience still in the hype phase or is it now closer to reality?
“The hype mostly fails to match reality. While most companies are improving their ability to engage customers through different channels, it remains difficult to share information and context across them,” said Sheryl Kingstone, research director at 451 Research.
“Certain verticals, such as retailers, have embraced newer channels of communications, such as online chat, but only a few industry-leading companies ensure a complete cross-channel memory.”
Richard Kolodynski, who was appointed by iVend Retail in March 2015 to expand the company’s presence in European markets, further commented that omni-channel has become a “buzzword du jour ” for many retailers and IT suppliers.
“It’s become so widely used that many are looking for alternative terms to use: ‘omni-commerce’, ‘connected retail’ or ‘total retail’ to name but a few,” said Kolodynski.
“However, for all the discussion surrounding omni-channel, the truth is that few retailers can currently deliver a clean and consistent experience across all their platforms. The store in particular feels like it is disconnected from the digital world, even though smartphones are widely used by consumers in their bricks-and-mortar journeys.”

 

 13 

Microsoft Streamlines Visual Studio Installation

Microsoft is refactoring its Visual Studio installation to be smaller, faster, more reliable and easier to manage.
As Microsoft moves to become all things to all developers, the company has undergone some growing pains in terms of making that happen via its core toolset, Visual Studio.
The move to take its .NET platform cross-platform and to support all different kinds of development from the Visual Studio toolset has bloated the size of the tools. And now Microsoft is moving to provide developers with a streamlined acquisition experience for Visual Studio, based on the type of development they are involved in.
At its Build 2016 conference, Microsoft delivered the first preview of the next version of Visual Studio and gave an early look at a lightweight acquisition experience with Visual Studio.
"The challenges we are seeing with our customers is that as we pivoted to support any developers building for any applications on any platform, the application model matrix is really exploding," Julia Liuson , Microsoft's corporate vice president of Visual Studio, told eWEEK .
If you just think about the mobile space alone, there's the Android software development kit (SDK), the Cordova tools, the different emulators and more that a developer can use, she said. The overall collection of tools, SDKs and emulators is a very large set.
Combining that large tool set with customers who have a habit of simply checking the "Select All" box when installing a product can lead to some disgruntled customers.
Indeed, according to Liuson, with customers who download the entire product on their machine, Microsoft frequently gets feedback about the size of the download and questions of why Visual Studio is now 40 gigabytes.
That's one of the problems the company is tackling—how to provide customers with a far more optimized experience for the particular workload that they are working on.
For instance, if developers just want to do Python programming, they don't really need all of the Visual Studio mobile tools or the cloud tools. If they're doing Xamarin development, they don't necessarily need all of the cloud and server development offerings.
"We're working on more workload-oriented acquisition experiences for our customers," Liuson said. "So when the product comes down to their machine, it's easily updateable and they can get the pieces they need easily. And what they decide not to use they can get rid of easily. "
This is a key experience Microsoft is working on for the next release of Visual Studio, code-named Visual Studio 15.
"We're hoping that with most of the users, the amount of stuff that they install to get started should be a lot smaller than what they do today," Liuson said.
In a post on the Visual Studio Blog, Tim Sneath, principal lead program manager for the Visual Studio Platform at Microsoft, said that based on feedback from developers at Build and other research, Microsoft has come up with a list of 17 workloads for developers to choose from in the next version of Visual Studio.
Those workloads are:
1. Universal Windows Platform development
2. Web development (including ASP.NET, TypeScript and Azure tooling)
3. Windows desktop app development with C++
4. Cross-platform mobile development with .NET (including Xamarin)
5. .NET desktop application development
6. Linux and Internet of things development with C++
7. Cross-platform mobile development with Cordova
8. Mobile app development with C++ (including Android and iOS)
9. Office/SharePoint add-in development
10. Python Web development (including Django and Flask support)
11. Data science and analytical applications (including R, F# and Python)
12. Node.js development
13. Cross-platform game development (including Unity)
14. Native Windows game development (including DirectX)
15. Data storage and processing (including SQL, Hadoop and Azure ML)
16. Azure cloud services development and management
17. Visual Studio extension development
"You can select one or more of these when setting up Visual Studio, and we’ll pull down and install just the relevant Visual Studio components that you need to be productive in that stack," Sneath said.
Liuson noted that Microsoft is very sensitive to the fact that because it is making such a major change to a core part of its product experience, there will be a lot of feedback. And the company wants to hear customers' perspectives and address any concerns people might have.
"Even though this is not a new product feature, it's such an important way for people to access all the features that we do offer," she said. "So this is actually a pretty important infrastructure change that the engineering team is working through. And it's a fairly big and disruptive change from an engineering angle. "
Sneath's post goes on to inform developers on how they can install Visual Studio faster and leaner. He also provides details on how the new installer will work.

 

 14 

Eclipse Foundation Ships Neon Release Train

The Eclipse Foundation shipped its eleventh annual release train, featuring 84 projects and 69 million lines of code from nearly 800 developers.
The Eclipse Foundation on June 22 announced the availability of its Neon release, the eleventh annual coordinated release train of open-source projects from the Eclipse community.
The Neon release includes 84 Eclipse projects consisting of more than 69 million lines of code, with contributions by 779 developers, 331 of whom are Eclipse committers. Last year's release train, the Mars release, had 79 projects.
"It takes a great amount of coordination and effort by many developers within our community to ship a release that is on-time," said Mike Milinkovich, executive director of the Eclipse Foundation, in a statement.
Ian Skerrett, vice president of marketing at the Eclipse Foundation, said one of the key focus areas for the Neon release was improving Eclipse's JavaScript development tooling. The foundation upgraded the JavaScript integrated development environment (IDE) in the Eclipse platform known as JavaScript Development Tools, or JSDT.
"There's been a lot of work on improving the usability and performance of our JavaScript tooling, including support for the latest version of JavaScript," Skerrett said. "That team did a lot of work on the whole JavaScript tool chain and we have integration with JavaScript build systems like Grunt and Gulp that JavaScript developers use. We have integration with the Chromium V8 debugger so you can have a tight compile and debug cycle. We also improved our support for Node.js development to make it easier to build and debug Node.js applications. "
In addition, Eclipse JSDT 2.0 includes new tools for JavaScript developers, including a JSON editor along with the support for Grunt/Gulp and a new Chromium V8 Debugger.
The Neon release also features an updated PHP Development Tools Package (PDT). The new Eclipse PDT 4.0 release for PHP developers provides support for PHP 7 and improved performance.
Another key area of focus was improving the lot of Java developers on the Eclipse platform, Skerrett said. In the core Eclipse platform and the Java Development Tools (JDT) project, the foundation added HiDPI support for high-resolution displays on Mac, Windows and Linux.
There are also updates to JDT, such as auto-save, which automatically saves files as developers type in the IDE. JDT's Content Assist has also been improved: it now highlights matched characters and provides substring completion.
Other improvements and additions include updates to Automated Error Reporting. The Eclipse Automated Error Reporting client can now be integrated into any third-party Eclipse plug-in or stand-alone Rich Client Platform (RCP) application.
The Neon release also features improved support for Docker Tooling and introduces the Eclipse User Storage Service (USS). The Eclipse USS is a new storage service that enables projects to store and retrieve user data and preferences from Eclipse servers, creating a better user experience (UX) for developers.
"Neon noticeably returns focus to essential coding improvements, like editor auto-save, HiDPI support, better dark theme and more intelligent Java Content Assist," said Todd Williams, vice president of Technology at Genuitec , a founding member of the Eclipse Foundation that offers tools supporting the Eclipse platform such as MyEclipse and Webclipse. "These changes, along with Neon's increased responsiveness, will help ensure that Eclipse remains competitive in its core market segments. "

 

 15 

Twilio IPO May Be Key Indicator for Other Unicorns in 2016

NEWS ANALYSIS: A good response from investors June 23 could help determine whether companies such as Dropbox, Uber and others decide to test the waters this year.
Information technology IPOs have been AWOL halfway through 2016. This has had analysts, investors and market watchers scratching their heads and wondering what the heck is going on.
There certainly are plenty of quality companies—Uber, Airbnb and Dropbox, for merely three examples—rising up that could consider an initial public offering, no question about that. And there still is a ton of money being invested in new and relatively new companies every week. We at eWEEK who report on such venture capital movements know all about this.
So why are IPOs not happening? There are two reasons: First, the markets currently are generally perceived to be too volatile (or hostile, as some people would put it)—especially as hair-trigger automatic trading on projections have become the norm. Second, larger companies are swallowing up smaller ones at such a breakneck pace that they don't have time to consider going public.
Last year, $20 billion worth of tech companies went private, according to Bulger Partners, a mergers and acquisitions advisory firm. On the other side, tech IPOs raised a mere $21 billion. Bulger Partners reported a whopping $232 billion worth of M&A transaction value for 2015 alone.
IPOs a Risky Proposition
IPOs have to be successful on Day 1, that's a fact. If they are not, the walls often can come crashing in very quickly, and fledgling startups need to have nerves of vanadium to weather such a potential crisis.
But let's put all of that aside for now, because there has been a development. Twilio Inc., a small but highly regarded startup whose cloud service enables developers to build and operate real-time communications within software applications, is making a breakthrough of sorts: It is going public June 23 at $15 per share.
Twilio allows software developers to programmatically make and receive phone calls and send and receive text messages using its Web-service APIs. Twilio's services, which go a long way toward keeping bugs out of software—and are especially valuable in rapid iteration-type environments—are accessed over HTTP and billed based on usage.
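For a sense of how lightweight that usage-based model is in practice, here is a minimal sketch of sending an SMS with recent versions of Twilio's Python helper library; the account SID, auth token and phone numbers are placeholders, not real credentials.

```python
# Minimal sketch using Twilio's Python helper library (pip install twilio).
# All credentials and numbers below are placeholders.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

message = client.messages.create(
    to="+15558675309",     # destination number (placeholder)
    from_="+15017122661",  # a Twilio-provisioned number (placeholder)
    body="Hello from Twilio!",
)
print(message.sid)  # identifier for the queued message, billed per use
```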
As of last year, more than 560,000 software developers were using Twilio in their daily production work.
The San Francisco-based company raised more than it expected—about $150 million, or about $11 per share—in its initial private offering June 22. That's a good sign for the dozens of other so-called unicorns that have been valued at more than $1 billion through private fundraising.
Twilio Will Start Trading on NYSE at $15
Twilio said June 22 that it will start trading June 23 on the NYSE at $15 a share, above the $12-to-$14 range the company had previously indicated. The June 22 investors at $11 no doubt are pleased with that declaration.
The deal, which will be the first Silicon Valley tech IPO of the year, is a closely watched test case to determine whether the market will be receptive of future tech IPOs this year. A good response June 23 could help determine whether companies such as Dropbox, Uber and others decide to test the IPO waters themselves later this year.
The offering of the San Francisco-based company comes as U.S.-listed IPOs are on track for their worst year in terms of numbers since the financial crisis year of 2008.

 

 16 

Google Seeks to Spur Kids' Interest in Coding With Project Bloks

A Google research project seeks to build on years of theory and research in the area of tangible programming to interest children in programming at an early age.
Google, in collaboration with design firm IDEO and a researcher from Stanford University, is working on an effort dubbed Project Bloks that is designed to get kids started on programming at a very young age.
The project is inspired by previous and long-standing academic work and research in the area of so-called tangible programming—in which children learn basic programming concepts by manipulating physical objects like wooden blocks.
One early example of such work is Tern, a tangible programming language developed several years ago by a graduate student at Tufts University that gave children a way to build basic programs by connecting a set of interlocking blocks together.
Each of the blocks represented a specific programming instruction like 'start', 'stop', 'turn' or 'move left,' which when put together created a set of basic instructions for a robot to follow.
Google's Project Bloks seeks to build on such research by creating what it described on its Research Blog as an open hardware platform that will give designers, developers, educators and others a way to build "physical coding experiences" for children.
As a first step in this direction, the company has built a working prototype of a system for tangible programming consisting of three components—a "Brain Board," "Base Boards" and programmable "Pucks".
Google's pucks function like the blocks in Tern. Each puck can be programmed with a different function and be placed on the Base Board, which then reads the instruction or instructions on the puck via a capacitive sensor, the company said.
Multiple Base Boards can be connected together in different configurations to create various programs. When the Brain Board is attached to the connected Base Boards it reads the instructions contained in each board and sends it via Bluetooth or WiFi to connected devices such as robots or toys, which then execute the instructions.
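To make the idea concrete, here is a purely hypothetical Python sketch of how a sequence of tangible instructions might be interpreted once the Brain Board hands it to a robot or toy; none of these names come from Google's actual Project Bloks interfaces.

```python
# Hypothetical illustration of tangible programming: a list of instruction
# tokens (one per puck) interpreted in order by a simple robot object.
PROGRAM = ["start", "move_forward", "turn_left", "move_forward", "stop"]

class PrintRobot:
    """Stand-in robot that just narrates what it would do."""
    def move_forward(self): print("moving forward")
    def turn_left(self):    print("turning left")

def execute(program, robot):
    running = False
    for instruction in program:
        if instruction == "start":
            running = True
        elif instruction == "stop":
            running = False
        elif running:
            getattr(robot, instruction)()  # dispatch e.g. robot.move_forward()

execute(PROGRAM, PrintRobot())
```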
"As a whole, the Project Bloks system can take on different form factors and be made out of different materials," Steve Vranakis and Jayme Goldstein, two members of Google's Creative Lab said in the Research Blog .
For instance, a puck can be devised out of nothing but a sheet of paper and some conductive ink, according to the two Google researchers.
"This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking—from composing music by using simple functions to playing around with sensors or anything else they care to invent," they said.
Working with IDEO, Google has developed a Coding Kit, which is a sort of proof-of-concept system for developers to use as a reference.
Project Bloks is one of two initiatives that Google announced this week pertaining to children and education. The other is a partnership with digital education company TES Global.
Under the effort, Google for Education has set up a new portal on the tes.com Website that will let teachers learn how to use Google Expeditions' virtual reality tours in the classroom. The arrangement with TES will give teachers a way to more easily find and share lessons that are compatible with Google Apps for Education and access free training on Google tools, TES said in a statement.

 

 17 

Chan Zuckerberg Initiative Selects Andela for First Major Investment

Andela, a company that pairs developers in Africa with opportunity in the U.S., has been selected as the first major investment of the Chan Zuckerberg Initiative.
"Brilliance is evenly distributed, but opportunity is not. " 
That's the founding principle behind Andela, a 2-year-old startup that's bringing together brilliant developers in Africa with opportunities in America—and that today announced it's the first major funding recipient of the Chan Zuckerberg Initiative.
CZI, founded by Facebook founder Mark Zuckerberg and his wife, Priscilla Chan, led Andela's Series B funding with an investment of $24 million.
"The round represents a huge vote of confidence from some of the most respected names in technology," Andela CEO and co-founder Jeremy Johnson wrote in a June 16 letter to investors and advisers, shared on the Andela blog. "Not only is it a vote for Andela, but it's also a recognition of the caliber of software developers and human beings that make up the Andela Fellowship. "
Johnson also welcomed investor GV, formerly Google Ventures, to the Andela family.
Zuckerberg acknowledged the investment in a post on his Facebook page.
"I was lucky to be born in a wealthy country where I had access to computers and the internet. If I had been born somewhere else, I'm not sure I would have been able to start Facebook—or at least it would have taken a lot longer and been more difficult," he wrote.
Zuckerberg added that the talent-opportunity gap is among the most dramatic in Africa, where six out of every 10 Africans are younger than 35, and in some places more than half of them are without work.
"Priscilla and I believe in supporting innovative models of learning wherever they are around the world—and what Andela is doing is pretty amazing," Zuckerberg added.
Andela has offices in Nairobi, Kenya, and Lagos, Nigeria, where it employs close to 200 engineers. Its four-year Fellows program is highly selective—to date, it has accepted less than 1 percent of the candidates from the more than 40,000 applications it has received.
Once selected, Fellows receive 1,000 hours of training over six months and then are paired with a U.S. company in need of development help. Andela educates the Fellow about the Andela customer company's culture and needs and then flies the Fellow to that company's headquarters for two weeks, to build trust with the team members and strategize a roadmap. After that, the U.S. team and the Andela developer communicate online daily.
"They're working in your time zone, communicating in your Slack channels and participating in your daily stand-ups," explains the Andela site, emphasizing its goal of providing as friction-free a service as possible.
The U.S. company gets a great developer; a brilliant person, with fewer local opportunities, gets a great job; and the hiring process is less stressful and time-consuming for the hiring company, as Andela does all the screening and interviewing on its end, it states.
To date, Andela clients include Microsoft, IBM and Udacity.
Diversity and Success
Diversity is a proven contributor to success, as is the inclusion of women in working groups and particularly in leadership positions.
Andela has a goal that 35 percent of its software development team should be women, Christina Sass, one of Andela's four co-founders, told CNN Money, adding that it has been "very disciplined" in that effort. In the spring, it hosted an all-female boot camp in Kenya and made an effort to communicate to women's families that Andela is a safe place to work. Ultimately, 1,000 women applied, 41 were selected for the boot camp and nine were accepted into Andela, according to CNN.
As part of Andela's vision to train 100,000 world-class developers over the next 10 years, on June 14 it announced three- and six-month internship programs in Lagos for creative thinkers, excellent problem solvers and people willing to "become the CEO" of their own work.
Benefits, it noted, include breakfast and lunch, a passionate working environment and an opportunity to work with some of the brightest minds on the planet. "Oh," it added, "and a chance to change the world!"

 

 18 

Ruby On Rails Reaches 5.0

The latest version of Ruby on Rails has a new framework and API mode.
Rails 5.0 is being described by its developers as "without a doubt the best, most complete version of Rails yet."
The two headline improvements are a new framework for handling WebSockets, and support for API mode.
The new framework, Action Cable, provides an integrated way to manage connections, a channels layer for server-side processing, and a JavaScript layer for client-side interaction. The developers say it makes designing live features like chat, notifications, and presence a lot easier, adding that it's what’s powering the features of Basecamp 3, if you want to see it in action.
Action Cable provides access to your entire Active Record and PORO domain model in your WebSockets work. The developers have added a new ActionController::Renderer system that you can use to render your templates outside of controllers, so you can reuse server-side templates for WebSocket responses.
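Action Cable itself is Ruby, but the persistent, bidirectional exchange it manages is plain WebSockets. As a rough illustration of that round-trip only (not Rails or Action Cable code), here is an echo server using the third-party Python websockets package.

```python
# Generic WebSocket echo server -- an illustration of the kind of persistent
# connection Action Cable manages, not an Action Cable implementation.
# Requires: pip install websockets (note: older releases of the package pass
# a second `path` argument to the handler).
import asyncio
import websockets

async def handler(ws):
    async for message in ws:               # wait for frames from the client...
        await ws.send(f"echo: {message}")  # ...and push replies over the same socket

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```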
In development, Action Cable runs in-process with the rest of your app. Because of this, the default development server has been switched from WEBrick to Puma. The developers say that in production you may well want to run Action Cable servers in their own processes, which is how it is used at Basecamp at scale.
The API mode is designed to give you a slimmed down version of Rails for client-side JavaScript or native applications that just need the backend to speak JSON. The developers say that while there’s still more work to be done on this feature, they feel they're off to a great start.

 

 19 

Enterprises: Tear Down Your Engineering Silos

Silos stifle creativity and make it difficult to work on collaborative projects, even with a person sitting right next to you in the office. What's an engineer, IT pro, or CIO to do? Will Murrell, a senior network engineer with UNICOM Systems, knows a thing or two about silos. UNICOM develops a variety of software and other tools to work with IBM's mainframe, Microsoft Windows, and Linux. Murrell recently talked with InformationWeek senior editor Sara Peters about a new breed of engineers trying to break down corporate barriers for good.

 

 20 

Codenvy's Language Server Protocol Reduces Programmer Envy

Codenvy, Red Hat and Microsoft collaborate on new language protocol for developers to integrate programming languages across code editors and IDEs.
Codenvy, Microsoft and Red Hat announced on June 27 the adoption of a language server protocol project to provide a common way to integrate programming languages across code editors and integrated development environments (IDEs).
The companies announced the new protocol during the opening general session of the DevNation 2016 conference in San Francisco. The project originated at Microsoft, which introduced it to the Eclipse Che IDE platform project, announced earlier this year at the EclipseCon conference in Reston, Va. The new protocol extends developer flexibility and productivity by enabling a rich editing experience within a variety of tools for different programming languages.
"Historically, most programming languages have only been optimized for a single tool," Tyler Jewell, Codenvy CEO and Eclipse Che project lead, said in a statement. "This has prevented developers from using the editors they know and love, and has limited opportunities for language providers to reach a wide audience. With a common protocol supported by Microsoft, Red Hat and Codenvy, developers can gain access to intelligence for any language within their favorite tools. "
Jewell told eWEEK the "dirty problem" with development tools for the past decade has been that developers had to choose a programming language and then be stuck with the tooling available for that language—because the tooling capabilities are always bound to proprietary APIs and componentry that changes for each programming language.
"So if you wanted to change programming languages, you generally had to change your IDE," he said. "And if you have an IDE that you like, there's generally not an easy way to get multiple programming languages supported on it. "
However, the new Language Server Protocol makes it possible for any IDE to work with any programming language. So with that, developers can choose their tools and work with any programming language, and programming language authors can write their language as they see fit.
Jewell said the Language Server Protocol is an open-source project that defines a JSON-based data exchange protocol for language servers, hosted on GitHub and licensed under the Creative Commons and MIT licenses. By promoting interoperability between editors and language servers, the protocol enables developers to access intelligent programming language assistants—such as find by symbol, syntax analysis, code completion, go to definition, outlining and refactoring—within their editor or IDE of choice, he said.
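Concretely, each message is a JSON-RPC 2.0 payload preceded by an HTTP-style Content-Length header. The sketch below frames an abbreviated "initialize" request the way an editor would before writing it to a language server's stdin; the params are trimmed for illustration.

```python
import json

# Frame an (abbreviated) LSP "initialize" request: a JSON-RPC 2.0 body
# preceded by a Content-Length header, as the base protocol specifies.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": "file:///tmp/project", "capabilities": {}},
}

body = json.dumps(request).encode("utf-8")
framed = b"Content-Length: %d\r\n\r\n%b" % (len(body), body)
# `framed` is what an editor writes to the language server's stdin.
print(framed.decode("utf-8"))
```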
The first two tools that are supporting this capability are Eclipse Che—the next-generation Eclipse IDE—and Microsoft's Visual Studio Code, Jewell said. Codenvy helped to achieve Eclipse Che support, and Microsoft, as originator of the protocol, put its engineers to work to get VS Code to support it.
"The Eclipse Che team and Red Hat have also announced they're adopting Visual Studio Code's Language Server Protocol—an open protocol that enables some of the rich editing features in VS Code," Joseph Sirosh, corporate vice president of the Data Group at Microsoft, said in a blog post. "This shows that the open-source contributions from VS Code are being adopted by tool and language providers, giving developers the flexibility to pair their favorite language with their favorite tools. "
"We have defined the common language server protocol after integrating the OmniSharp for C# and TypeScript servers into VS Code," Erich Gamma, a Microsoft Distinguished Engineer and leader of the Visual Studio Code project, said in a statement. "Having done a language server integration twice, it became obvious that a common protocol is a win-win for both tool and language providers: in this way, any language provider can make their language support available so that it is easily consumable by any tool provider. "
Before joining Microsoft in 2011, Gamma was a distinguished engineer at IBM where he was a key leader and contributor to the Eclipse platform and a leader of the Eclipse Java Development Tools project.
With the Language Server Protocol, programming language providers can support multiple tools across a variety of operating systems. The project has also created a global language server registry, built by Codenvy as an Eclipse project and hosted by the Eclipse Foundation, to make language servers discoverable for any tool to consume, Jewell said.

 

 21 

IBM Adds New Bluemix OpenWhisk Tools for IoT Development

IBM added new tools for its Bluemix OpenWhisk serverless computing platform that utilizes Docker. OpenWhisk also features user interface updates.
IBM has announced a set of new tools for its Bluemix OpenWhisk event-driven programming model, which uses Docker containers.
The new tools will enable developers to build intuitive applications that can easily connect into the Internet of things (IoT), as well as tap into advanced services such as cognitive, analytics and more—without the need to deploy and manage extra infrastructure, according to IBM.
"What OpenWhisk allows a developer to do is without any server infrastructure they upload their snippet of code, they choose when they want that code to run—like in response to something changing in the database in the cloud, or someone calling a Web URL—and then when that event occurs, the code gets run and IBM will auto-scale it for them," Mike Gilfix, vice president of Mobile & Process Transformation at IBM, told eWEEK .
"So we make sure that it scales to as much demand as they need and they only pay for the compute capacity that they need at the time that the code runs," he said.
Announced at DockerCon 2016, IBM's new OpenWhisk tools—NPM Module and Node-RED—will enable developers to more rapidly build event-driven apps that automatically execute user code in response to external actions and events, according to the company.
Moreover, IBM also plans to roll out new updates to the OpenWhisk user experience to make it easier for developers, including step-by-step workflows, new wizards to configure third-party services and feeds, and a new editor to manage sequences of actions, said Andrew Hately, CTO of IBM Cloud Architecture.
Node-RED is IBM's open-source IoT tool for creating event-driven applications. It enables developers to start prototyping their ideas without having to first write code. Node-RED can invoke triggers and actions within OpenWhisk, giving apps access to Watson analytics, the IBM IoT platform and a host of other Bluemix services.
Hately said IBM has been working to make OpenWhisk more intuitive for people developing in whatever programming language they want so they can benefit from the event-driven, serverless style of development.
"A lot of this is just continuing the drumbeat of making this more consumable to developers working in the polyglot, language-of-choice-style of development," he said.
With that in mind, IBM has continued with its first-class support of Node.js because of its popularity for IoT and device developers, Hately said.
"On the Node side we tie into our Node-RED platform," he said. "This is all about taking multiple open technologies that are getting large developer communities and continuing to enhance them and better integrate them. IoT is probably the biggest example of people wanting to do very, very lean, message-based integrations. "
"Within the node community, we have a very large contingent of Node.js users," said Todd Moore, vice president of Open Technology at IBM. "And we knew we could make things much easier for them. We see Node as one of the dominant languages within Bluemix these days. More than half of what we see deployed [on Bluemix] is using Node. "

 

 22 

Eclipse Updates Four IoT Projects, Launches a New One

The Eclipse Foundation announced new releases of four open-source IoT projects to accelerate IoT solution development.
The Eclipse Foundation, which has been leading an effort to develop open-source technologies for Internet of things application development, announced that the Eclipse Internet of Things (IoT) Working Group has delivered new releases of four open-source IoT projects the group initiated over a year ago.
The four projects, hosted at the Eclipse Foundation, are Eclipse Kura 2.0, Eclipse Paho 1.2, Eclipse SmartHome 0.8 and Eclipse OM2M 1.0. These projects are helping developers rapidly create new IoT solutions based on open source and open standards.
"We are certain that the Internet of Things will only be successful if it is built on open technologies," Eclipse Foundation Executive Director Mike Milinkovich said. "Our goal at Eclipse is to ensure that there is a vendor-neutral open source community to provide those technologies. "
Eclipse IoT is an open-source community that provides the core technologies developers need to build IoT solutions. The community is composed of more than 200 contributors working on 24 projects. These projects are made up of over 2 million lines of code and have been downloaded over 500,000 times, Eclipse officials said.
Moreover, the Eclipse IoT Working Group includes 30 member companies that collaborate to provide software building blocks in the form of open-source implementations of the standards, services and frameworks that enable an open Internet of things.
In addition to updating four of its existing IoT projects, Eclipse also proposed a new one. Eclipse Kapua is an open-source project proposal from Eurotech to create a modular integration platform for IoT devices and smart sensors that aims to bridge operation technology with information technology, Milinkovich said.
Eclipse Kapua focuses on managing edge IoT nodes, including their connectivity, configuration and application life cycle. It also allows aggregation of real-time data streams from the edge, either archiving them or routing them toward enterprise IT systems and applications.
"As organizations continue to implement IoT solutions, they are increasingly turning to Eclipse IoT for open-source technologies to implement these solutions," Ian Skerrett, vice president of marketing at the Eclipse Foundation, told eWEEK. "For instance, Eclipse Paho has become the default implementation for developers using MQTT [formerly MQ Telemetry Transport], and Eclipse Kura significantly reduces the costs and complexity of implementing an IoT gateway. It is clear open source will be a major force in the Internet of things and Eclipse IoT has become significant source of open-source technology for IoT. "
Eclipse Paho provides open-source client implementations of the MQTT and MQTT-SN messaging protocols. The new Paho 1.2 release includes updates to existing Java, Python, JavaScript, C, .NET, Android and Embedded C/C++ client libraries. Improvements in the new version include automatic reconnect and offline buffering functionality for the C, Java and Android clients; WebSocket support for the Java and Python clients; and a new Go client, which runs on Windows, Mac OS X, Linux and FreeBSD. Paho 1.2 is now available.
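As a sense of how small an MQTT client can be, here is a minimal subscriber sketch using the Paho Python client's 1.x callback API; the broker host and topic are placeholders.

```python
# Minimal Paho MQTT subscriber (pip install paho-mqtt; 1.x callback API).
# Broker host and topic below are placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("sensors/temperature")   # subscribe once connected

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())    # print each incoming reading

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, 60)
client.loop_forever()                          # handles reconnects and dispatch
```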

 

 23 

Tesla Autopilot Crash Under NHTSA Investigation

The National Highway Traffic Safety Administration has opened an inquiry into the autopilot system in Tesla's Model S, following the death of a driver who was using the system.
In a statement posted on the Tesla Motors website on June 30, the company acknowledged the inquiry and characterized the incident as "the first known fatality in just over 130 million miles where Autopilot was activated. "
The NHTSA said in a statement Tesla had alerted the agency to the crash, which occurred on May 7 in Williston, Fla.
The Levy Journal Online, which covers Levy County, Fla., where the crash occurred, described the accident based on an account provided by the Florida Highway Patrol. A tractor-trailer was traveling west on US 27A and made a left turn onto NE 140 Court as the Tesla driver was heading in the opposite direction. The Tesla passed underneath the 18-wheeler and its roof collided with the truck. It then continued along the road before striking two fences and a utility pole.
"Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied," Tesla said in its statement. "The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. "
The failure of Tesla's computer vision system to distinguish the truck from the similarly colored sky appears to have been compounded by radar code designed to reduce false positives during automated braking. Asked on Twitter why the Tesla's radar didn't detect what its cameras missed, CEO Elon Musk responded, "Radar tunes out what looks like an overhead road sign to avoid false braking events."
The driver of the Model S, identified in media reports as 40-year-old Joshua D. Brown from Canton, Ohio, died on the scene.
The driver of the truck, 62-year-old Frank Baressi, told the Associated Press that Brown was "playing Harry Potter on the TV screen" at the time of the crash.
A spokesperson for the Florida Highway Patrol did not immediately respond to a request to confirm details about the accident.
In its June 30 statement, Tesla said drivers who engage Autopilot are warned to keep both hands on the wheel at all times. Autopilot, despite its name, is intended as an assistive feature rather than an alternative to manual control.
The incident has stoked doubts about the viability of self-driving cars and the maturity of Tesla's technology. Clearly, a computer vision system that cannot separate truck from sky in certain light conditions could use further improvement. It was unclear at press time whether Tesla will face any liability claims related to its code or sensing hardware.
However, Tesla insisted in its statement that, when Autopilot is used under human supervision, "the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving. "
In April, at an event in Norway, Musk said, "The probability of having an accident is 50% lower if you have Autopilot on," according to Electrek.
That may be, but data isn't the only consideration. When human lives are at stake, perception and emotion come into play. Automated driving systems will have to be demonstrably better than human drivers before people trust them with their lives.
Yet perfection is too much to expect from autopilot systems. Machines fail, and fallible people are likely to remain in the loop. In aviation, automation is common and has prompted concerns that it degrades the skills pilots need when intervention is called for. If the same holds true for cars with autopilot systems, we can expect to become worse drivers, less able to respond to emergencies, even as our autopilot systems reduce fatalities overall.
There may be no getting around the fact that, given current vehicle designs, driving down a highway at high speed entails some degree of risk, whether a person or a computer is at the wheel.

 

 24 

HTC continues bitter struggle; HTC 10 flagship on course to sell only 1m units all year

HTC has been struggling. It’s been struggling for so long that we’d be hard pressed to remember a time when it wasn’t. And despite the company’s recent upswing in the critical reception of its products, a new market report shows HTC may continue to struggle until its bitter end.
The HTC 10 is the company’s latest flagship, released earlier this year. Overall it’s a great device that stands up well to the competition, albeit a bit overpriced. Unfortunately for HTC, it looks like its new device isn’t selling well at all, with a report from TrendForce saying only around 1 million units will be sold this year. For comparison, Samsung’s flagship, the also-excellent Galaxy S7, is selling in the tens of millions.
What’s worse is that HTC’s overall phone business is in steep decline year over year. According to the same report, HTC is on course to sell fewer than 14 million phones this year, a 27% drop compared to 2015. Not even the Nexus handsets that HTC is reportedly producing are helping.
Last quarter, HTC announced a massive 64% drop in revenue, with the company posting its 4th consecutive loss.
In the meantime, HTC has seen great success with its VR headset, the Vive. In fact, the company even spun off the Vive business to make sure it’s protected from the dramatic events going on in its smartphone sector.
But if fans are looking to the Vive or its future incarnations to save HTC as a whole, they might be in for a shock. As this piece from our friends at Ars Technica points out, HTC owns almost nothing of the technology that’s powering the Vive. And Valve, which owns that know-how, is looking to spread the knowledge to other manufacturers. In other words, HTC’s days as a successful VR HMD company may also be numbered.
Whatever happens next, it’s clear that HTC needs a big win to prop up the company. But it’s not clear it can deliver one.
Source: TrendForce via: Charles Arthur

 

 25 

LG Expands the X Series with the LG X5 and LG X Skin

LG launched the X series last month and revealed some models that also came with the X-Men theme, as each of the smartphones promoted one character from the "X-Men: Apocalypse" movie. The X series already contains models like the X Power, X Mach, X Style, and X Max, with each handset focusing on one special feature. For instance, the X Power comes with a larger battery compared to the others.
Two new models have been added to the X line today, and one of them is the X5. The smartphone comes with a 5.5-inch HD display, the same as the X Screen. It has a 1.3GHz quad-core processor and provides 2GB of RAM, along with 16GB of internal memory and the option to expand capacity to 32GB with a microSD card.
The smartphone supports two SIM cards and measures 151.6×76.9×7.2mm. LG X5 weighs 133g and packs a 2,800mAh battery. LG X Power has the largest battery in the series, reaching 4,100mAh.
LG X5 sports a rear camera of 13MP as well as a 5MP front unit and runs Android Marshmallow 6.0 out of the box.
The second smartphone that LG has launched today is the X Skin, which has a smaller display of 5 inches and 720p resolution. It comes with 1.5GB of RAM and provides 16GB of internal memory, expandable to 32GB using a microSD card.
The processor is the same as in the LG X5, but this model has a smaller 2,100mAh battery. The rear camera is 8MP, while a 5MP unit sits on the front. Both smartphones support 4G LTE connectivity and come in black and white, priced at $173 in Korea. The LG X Skin in gold and titanium sells for $200.

 

 26 

Microsoft has permanently cut the price of its Surface 3 Docking Station in the UK by 40%

Back in April, Microsoft slashed 40% off the price of its Surface 3 Docking Station in the UK in a 'limited time offer' that was originally supposed to end on June 30. But as with so many of Microsoft's promotional deals in the UK, terms like 'limited time offer' and 'special offer' have very little meaning.
The company has now extended that discount by a further six months, meaning that this isn't so much a short-term deal, but effectively a permanent price cut that will continue until stocks are depleted.
Given that Microsoft has already said that Surface 3 production will end this year , it's inconceivable that the Docking Station will return to its original price of £164.99 when this latest promotional period ends on January 13, 2017.
The Surface 3 Docking Station is now priced at £98.99 - but curiously, the store listing for the product is also promoting a £50 discount on the Surface 3 Wi-Fi tablets...
...a discount that doesn't appear to exist, as both Wi-Fi models are currently listed on the Microsoft Store at full price.
But if you're keen to get a Surface 3 in the UK, you can get it elsewhere for less. Currys PC World has moved the Windows 10 tablet to its 'clearance' stock, cutting £70 off its price.
Source: Microsoft Store

 

 27 

Data Generation Gap: Younger IT Workers Believe The Hype

IT has been experiencing a bit of a generation gap between so-called digital natives, who grew up with iPhones and cloud computing, and older workers who didn't. Now, a new study from IDG Enterprise says younger workers see a lot more opportunity in big data than their older counterparts do.
Specifically, workers aged 18 to 34 are "vastly more likely" than other age groups to strongly agree on the transformative potential of big data and their companies' readiness to take advantage of it, according to the IDG Enterprise Data & Analytics Survey 2016 .
[Your job is probably secure. For now. Read Robots, AI Won't Destroy Jobs Yet.]
IDG Enterprise surveyed 724 IT decision-makers of all ages involved in big data initiatives. The report does not reveal the numbers of respondents per age group.
The report said respondents aged 55 and older are significantly more likely than those in other age groups to disagree that big data will open up new revenue opportunities and/or lines of business in the near future. These respondents are also more uncertain than other age groups about whether their big data ecosystem will change in the next 12 months, and how it will change.
In its report, IDG Enterprise said aged-based differences about the value of data-driven projects may be attributable to "younger employees being more comfortable with the latest technologies and more inured to the inevitability of technology-driven disruption. On the other hand, older respondents have seen many supposedly transformational technologies come and go throughout their careers. "
In other words, perhaps older respondents are more seasoned and cynical, having already been through multiple cycles of tech hype. 
Yet, technologies such as the Internet of Things (IoT) and big data analytics are driving big investments by enterprises, according to the report.
More than half of respondents (53%) said their companies are currently implementing, or planning to implement, data-driven projects within the next 12 months. The report defines data-driven projects as those undertaken with the goal of generating greater value from existing data.
Of the projects underway or in the planning stages, 26% of respondents said they are already implemented, 14% said they are in the process of implementation or testing, 13% said they're planning implementation in the next 12 months, 8% said they are considering a data-driven project, and 8% said they're likely to pursue one but are still struggling to find the right strategy or solutions.
How does your company stack up compared with these results? Do you believe there is an age gap when it comes to understanding the value of data-driven implementations? Tell us all about it in the comments section below.

 

 28 

IBM Opens Blockchain-Oriented, Bluemix Garage In NYC

In the digital economy, blockchain transactions are believed likely to replace many existing electronic transactions and provide a hard-to-crack record of the event that is captured in multiple locations.
Anticipating a new generation of blockchain-based financial systems, IBM is opening a Bluemix Garage in New York City in hopes of attracting future blockchain developers to its Bluemix cloud.
Blockchain was the innovation captured in the implementation of Bitcoin, where the execution of an electronic transaction also becomes its accounting record. As one transaction follows another, a chain of such records is built up on multiple computers and can be reconstructed by different participants in the chain. The process provides a distributed general ledger that's hard to tamper with from the outside.
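The chaining itself is simple to illustrate: each record embeds a hash of its predecessor, so altering any earlier record invalidates every hash after it. Here is a toy Python sketch of that idea only, not any production ledger such as Hyperledger.

```python
import hashlib
import json
import time

# Toy hash chain: each block stores its predecessor's hash, so tampering
# with an earlier block breaks every later link.
def make_block(transactions, prev_hash):
    block = {"time": time.time(), "tx": transactions, "prev": prev_hash}
    payload = json.dumps(
        {k: block[k] for k in ("time", "tx", "prev")}, sort_keys=True
    ).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["genesis"], "0" * 64)
second = make_block(["alice pays bob 5"], genesis["hash"])
# Editing genesis["tx"] now would no longer match second["prev"].
```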
A whitepaper produced by the IBM Institute for Business Value cites the benefits that blockchain systems will bring to their users, including improving the security and integrity of transactions while reducing the friction involved in completing them. The paper goes so far as to say that in the future, blockchain transactions will allow organizations to reorganize into institutions capable of more fluid changes and exchanges with other organizations:
Blockchain technology has the potential to obviate intractable inhibitors across industries. As frictions fall, a new science of organization emerges, and the way we structure industries and enterprises will take novel shape.
An implementer of blockchain could produce new mobile banking and wealth management applications.
Mizuho Financial Group in Tokyo recently announced a pilot project to test blockchain as a means of virtual currency settlements. The pilot came out of the IBM garage in Tokyo. It's exploring how payments in different currencies can be quickly settled, potentially leading to the launch of new financial services, according to IBM's June 28 announcement.
The Mizuho project makes use of Hyperledger open source code. Hyperledger is a blockchain-supporting project hosted by the Linux Foundation. Blockchain is now also the topic of developer conferences, such as the Fintech conference in Washington, D.C., on Aug. 2.
IBM emphasizes application development skills, web development, transaction systems, use of analytics, cognitive computing, and advanced IBM systems such as Watson at its garage facilities.
The Bluemix Garage will be established at 315 Hudson St. in SoHo at a campus run by Galvanize, a technology education service.
The area is already the home of many of the city's technology startups. Galvanize advertises that its courses will turn out a data scientist or financial technology expert in 12 months of full-time coursework. IBM opened a Bluemix Garage in San Francisco last year at a building occupied by Galvanize. Big Blue has also added Bluemix garages in Toronto, Tokyo, London, Nice, and Singapore.
Company spokesmen have said in the past they plan to open one in Melbourne, Australia, as well.
[Read IBM Opens Fourth Bluemix Garage in France.]
The garage will also include access to consultants with expertise in IBM Design Thinking, IBM's methodology for moving from creative idea through iterative product design and into production.
The garage is also a place where developers can test drive the Bluemix cloud. They have access to tools, open source code, and IBM software. IBM would be happy to see more developer activity on Bluemix at a location close to its Watson AI system headquarters in New York.
It's also been a partner with the city in encouraging startups to use the city's Digital NYC platform , where infant companies can get connection services and a chance to collaborate with 8,000 other startups already using it.
At its New York garage, IBM wants "to advance the science of blockchain, helping to remove complexity and make it more accessible and open. Financial services, supply chains, IoT, risk management, digital rights management and healthcare are some of the areas that are poised for dramatic change using blockchain," according to this week's announcement.

 

 29 

Hortonworks Commits To Microsoft's Azure Cloud

Hadoop distributor Hortonworks used its Hadoop Summit in San Jose this week to get a little closer to one of its top cloud technology partners -- Microsoft.
The big data company announced that Microsoft Azure HDInsight is its Premier partner for Connected Data Platforms -- Hortonworks Data Platform for data at rest and Hortonworks DataFlow for data in motion.
"Azure HDInsight as our Premier Connected Data Platforms cloud solution gives customers flexibility to future proof their architecture as more workloads move to the cloud," Hortonworks CEO Rob Bearden wrote  in a prepared statement  released June 28.
The closer partnership with Microsoft was one of several announcements from Hortonworks during the Hadoop Summit this week. The company also updated its Hortonworks Data Platform package with features for enterprise customers, introduced a new precision medicine consortium to explore a next-generation open source platform for genomics research, and struck a partnership with AtScale to advance business intelligence on Hadoop.
[Another Hadoop distributor, MapR, also recently released an update. Read MapR Spyglass Initiative Eases Big Data Management.]
Hortonworks Data Platform (HDP) 2.5 is the newest version. The company says it offers enterprise-ready features, including an integration of comprehensive security and trusted data governance that both leverage Apache Atlas and Apache Ranger. The company has also included a host of other open source big data technologies to make the package an enterprise-grade experience.
The platform now also offers the web-based data science notebook, Apache Zeppelin, for interactive data analytics and the creation of interactive documents with SQL, Scala, Python, and other tools.
The inclusion of the most recent version of Apache Ambari gives enterprises support for planning, installing, and securely configuring HDP, and for performing ongoing maintenance and management of the systems. Also, a new role-based access control model now lets administrators provide different users with different functional access to the cluster.
To improve developer productivity, the company has added Apache Phoenix Query Server to enable more choices for development languages to access data stored within HBase. Apache Storm now allows for large-scale deployments for real-time stream processing. The new version also includes new connectors for search and NoSQL databases, according to Hortonworks.
Hortonworks also announced a new partnership with AtScale, offering that startup's technology for enabling SQL-type queries against data resident in Hadoop.
"From day one, our goal has been to make BI and Hadoop work in harmony by erasing the friction associated with moving data and forcing end users to learn new BI tools," wrote AtScale CEO Dave Mariani in a prepared statement. AtScale's technology will be available via Hortonworks in the third quarter, the companies said.
Hortonworks also announced its own plan to participate in the precision medicine space with the formation of a new consortium "to define and develop an open source genomics platform to accelerate genomics-based precision medicine in research and clinical care."
In addition to Hortonworks, initial members of this consortium include Arizona State University, Baylor College of Medicine, Booz Allen Hamilton, Mayo Clinic, OneOme, and Yale New Haven Health.
Hortonworks said that this consortium will take on the task of defining the requirements and addressing the limitations of current technology for storing massive volumes of genomic information, analyzing it, and querying it at scale in real time.
Hortonworks noted the consortium will apply "Design Thinking" to this problem.
"Unleashing the power of data through open community and collaboration is the right approach to solve a complex problem like precision medicine," DJ Patil, chief data scientist, White House Office of Science and Technology Policy, wrote in a prepared statement. "Initiatives like this one will break data silos and share data in an open platform across industries to speed genomics-based research and ultimately save lives. "

 

 30 

Microsoft to introduce more flexible Enterprise Advantage licensing in 2017

Microsoft has detailed changes to its volume licensing that aims to let organisations mix perpetual and subscription-based software with cloud services in a way that best makes sense to the customer.
The firm has also officially retired Select Plus licensing and doubled the minimum requirement for the current Enterprise Agreement licensing to 500 users or devices.
Microsoft said that it is set to introduce a new Enterprise Advantage plan sometime in 2017 as an extension of the Microsoft Products and Services Agreement (MPSA) unveiled in January. The move is part of a wider effort by the company to simplify its software licensing.
In fact, Microsoft appears to be positioning Enterprise Advantage as the eventual replacement for the current Enterprise Agreement volume licensing scheme. The firm claimed that the new plan will offer the same Enterprise Agreement benefits, including products, prices and coverage, but with greater flexibility to align to a customer's purchasing structure and needs.
This flexibility includes the freedom to mix perpetual and subscription licensed on-premise software with any of Microsoft's cloud services like Office 365 and Azure, and to increase or decrease subscriptions and services as required, in order to meet ever-changing business needs.
"At launch, Enterprise Advantage on MPSA will be available for all commercial customers in markets where the MPSA is available, but will be the optimal choice for most customers with up to 2,400 users or devices," said Richard Smith, Microsoft's general manager for worldwide licensing and pricing, writing on the firm's Volume Licensing Blog .
"We anticipate introducing similar offerings called Government Advantage and Education Advantage later in 2017, and will continue to build additional functionality over time to support all of our customers with modern licensing choices. "
Meanwhile, Microsoft said that the planned retirement of the Select Plus licensing scheme has now come into effect. Customers with existing agreements will not be able to place orders through this plan from their next anniversary date, and are encouraged to move to MPSA licensing.
Also now in effect is a change to the minimum commitment for customers on the Enterprise Agreement volume licensing scheme. Organisations signing up for new Enterprise Enrolments or Enterprise Subscription Enrolments must now sign up for a minimum of 500 users or devices, up from 250.

 

 31 

MIT Develops New "Swarm" Multi-Core CPU Architecture for Higher Speeds

Swarm is available as a 64-core CPU, which, in theory, should be 64 times faster than a normal CPU. Unfortunately, like most multi-core CPUs, it's not.
The problem lies in the fact that applications that run on multi-core CPUs need to have their source code adapted, split into tasks, and then have those tasks classified by priority to avoid data-overwrite issues. This operation is time-consuming and relies on human labor, which is often imperfect.
The new Swarm system comes with special circuitry that's responsible for classifying tasks using timestamps and running the tasks in parallel, starting with the highest-priority.
Swarm avoids data storage conflicts when two or more tasks want to write to the same memory location by including further special circuitry that backs up the memory's contents, allows the highest-priority task to run first, and then restores the data so the lower-priority task can be rerun.
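As a rough software analogy only (Swarm does this in hardware, with tasks genuinely running in parallel), the bookkeeping resembles the following sequential Python sketch: tasks carry timestamps, memory is backed up before a task runs, and a conflict rolls the loser back to be retried.

```python
import heapq

class Conflict(Exception):
    """Signals a task touched data claimed by an earlier-timestamped task."""

memory = {"counter": 0}

def run(tasks):
    # tasks: list of (timestamp, seq, fn); a lower timestamp means higher priority.
    heapq.heapify(tasks)
    while tasks:
        ts, seq, fn = heapq.heappop(tasks)
        backup = dict(memory)       # back up memory before the task writes
        try:
            fn(memory)              # run the task (speculatively, in Swarm's case)
        except Conflict:
            memory.clear()
            memory.update(backup)   # undo the conflicting task's writes...
            heapq.heappush(tasks, (ts, seq, fn))  # ...and retry it later
```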
During tests, MIT's Swarm achieved computation speed-ups of between 3 and 18 times compared with classic multi-core CPU programs. Programs that ran on the Swarm architecture also required a tenth or less of the code modifications needed to adapt software for classic multi-core CPUs.
MIT says the new Swarm architecture even achieved a 75-times speedup for an app that couldn't be ported to the classic multi-core platform.
Swarm's secret lies in using graphs for classifying task prioritization and then running the parallel computing operations. All of this is automated and excludes the human factor from the process.
"Multicore systems are really hard to program," says Daniel Sanchez, Swarm project lead and an assistant professor in MIT’s Department of Electrical Engineering and Computer Science. "You have to explicitly divide the work that you’re doing into tasks, and then you need to enforce some synchronization between tasks accessing shared data. What this architecture does, essentially, is to remove all sorts of explicit synchronization, to make parallel programming much easier. "

 

 32 

More than 2,000 police data breaches in 4.5 years, report reveals

Police forces across the UK are still involved in 10 data breaches a week, according to a report by civil liberties campaign group Big Brother Watch.
The Safe in Police Hands? report, based on freedom of information (FOI) requests, reveals that between June 2011 and December 2015 police officers and staff were responsible for at least 2,315 data breaches.
The greatest number of data breaches occurred at West Midlands Police (488), followed by Surrey Police (202), Humberside Police (168), and Avon and Somerset Police (163).
More than 800 employees accessed personal information for no policing purpose, while data was shared inappropriately or without authorisation almost 900 times, the report claims.   
Specific incidents show officers misusing their access to information for financial gain and passing sensitive information to members of organised crime groups, the report said.
In more than half the cases, the report said no formal disciplinary action was taken, while a written or verbal warning was issued in only 11% of cases.
However, 13% of cases resulted in either a resignation or dismissal and 3% resulted in a criminal conviction or a caution.
Considering data is now the driving force of society, Big Brother Watch said any breach can pose a threat to our privacy and security. 
“Abusing access to private and sensitive information is not acceptable by anyone, but particularly by those charged with keeping us safe and upholding the law,” the group said in a statement.
As a result of the government’s digital by default policy, the report notes that the levels of data the police handle will increase.
“While there have been improvements in how forces ensure data is handled correctly, this report reveals there is still room for improvement. Forces must look closely at the controls in place to prevent misuse and abuse,” the report said.
With the potential introduction of internet connection records (ICRs), as outlined in the Investigatory Powers Bill, the report said the police will be able to access data which will offer the deepest insight possible into the personal lives of all UK citizens.
Big Brother Watch said the breach of such detailed information would be over and above the extent of the breaches outlined in the report.

 

 33 

BleachBit 1.12 Free System Cleaner Brings Support for Ubuntu 16.04, Fedora 24

BleachBit 1.12 comes as a replacement for BleachBit 1.10, which was announced at the beginning of the year, and it looks like it has been in development since April under the BleachBit 1.11.x umbrella. During its three-month development cycle, the software has received a total of three Beta builds that have brought many changes, improvements, and a handful of new features.
BleachBit is the tool you need if you want to keep your GNU/Linux or Windows clean and free of junk files left by various applications. The latest version, BleachBit 1.12, includes updates for the popular Mozilla Firefox and Google Chrome web browsers, a few under-the-hood improvements, and many changes to both Linux and Windows ports.
Probably the most important feature of them all is the ability to install the software on the recently released Ubuntu 16.04 LTS (Xenial Xerus) and Fedora 24 Linux operating systems, as the developer has provided users with both DEB and RPM binary packages, making installation easy and painless.
Additionally, BleachBit's cleaning engine was greatly improved for many popular open-source applications, including EasyTAG audio tag editor, Epiphany web browser, Evolution groupware client, Rhythmbox music player, and Transmission BitTorrent client. Also, there's better support for cleaning junk files from the GNOME desktop.
Last but not least, the KDE cleaner for the KDE 4 desktop environment has been updated to work on openSUSE Linux, the X11 and Thumbnails cleaners have received improvements as well, iBus Pinyin has been whitelisted, and the software has switched to use the GIO VFS API instead of the deprecated GnomeVFS for accessing file systems.
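BleachBit's rules are actually written in its CleanerML XML format, but each one boils down to a set of junk-file path patterns plus a delete (or preview) action. As a rough, hypothetical Python sketch of what the engine does - the patterns below are illustrative examples, not BleachBit's real definitions:

    import glob
    import os

    # Hypothetical cleaner rules: application name -> junk-file globs.
    CLEANER_RULES = {
        "transmission": ["~/.cache/transmission/**"],
        "epiphany": ["~/.cache/epiphany/**"],
    }

    def preview(app):
        # Yield (path, size) for files a rule would remove - a dry run
        # in the spirit of BleachBit's preview mode; nothing is deleted.
        for pattern in CLEANER_RULES[app]:
            for path in glob.glob(os.path.expanduser(pattern), recursive=True):
                if os.path.isfile(path):
                    yield path, os.path.getsize(path)

    reclaimable = sum(size for _, size in preview("transmission"))
    print("transmission: %d bytes reclaimable" % reclaimable)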
Of course, there are many changes for improving the usability of the software on Windows systems, so we recommend taking a look at the changelog attached at the end of the article for more details. In the meantime, you can download BleachBit 1.12 for GNU/Linux and Microsoft Windows operating systems right now via our website.

 

 34 

'Sneak peek' at Xbox avatars with wheelchairs hints at wider avatar upgrades on the way

There are already plenty of customization options for Xbox avatars, but one notable omission for some gamers will soon be addressed - and it looks like Microsoft may also be preparing to upgrade its avatars for all Xbox users.
Yesterday, Xbox chief Phil Spencer responded to comments on Twitter about the possibility of adding wheelchairs as an option for avatar customization. He confirmed that Microsoft was already looking into it, and that the addition is "not far off".
A day after Spencer's tweet, his colleague, Mike Ybarra, offered a 'sneak peek' at what the avatars will look like with wheelchairs:
And as you can see from those images, the addition of wheelchair options appears to be just one change on the way for Xbox avatars. Compare with the image at the top of this article, showing what avatars currently look like on Xbox Live, and it seems clear that Microsoft is preparing to upgrade its avatars with more detailed versions.
We still don't know exactly when these changes will come into effect, but we do know that Microsoft is preparing a major update for the Xbox One this summer , so there's a good chance the avatar improvements will arrive around the same time.
Source: @XboxQwik via VideoGamer

 

 35 

Uber App Update To Track Driver Behavior

Uber soon will know the answer to a question raised by bumper-stickers on many vehicles traveling America's highways: "How am I driving?"
In a forthcoming update to the app used by Uber drivers, the transportation platform company has implemented safety telematics that measure the braking, acceleration, and speed of the vehicles used by its drivers.
The update also adds notifications designed to promote better driving, like reminders to take breaks and to mount the phone used for the driver app on the dashboard rather than keeping it in-hand. It adds daily driving reports that compare driving patterns to those of other drivers.
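Uber hasn't said exactly how its telematics classify events, but metrics like harsh braking typically fall out of simple signal processing over the phone's GPS and accelerometer streams. A hypothetical Python sketch of the idea - the function and the threshold are illustrative, not Uber's actual code (around -3 m/s^2 is a commonly cited harsh-braking cutoff):

    def harsh_braking_events(samples, threshold_ms2=-3.0):
        # samples: time-ordered list of (unix_time_s, speed_m_per_s) tuples.
        # Returns timestamps where deceleration exceeds the threshold.
        events = []
        for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
            dt = t1 - t0
            if dt <= 0:
                continue  # skip duplicate or out-of-order fixes
            accel = (v1 - v0) / dt
            if accel <= threshold_ms2:
                events.append(t1)
        return events

    # A vehicle dropping from 15 m/s to 5 m/s over two seconds brakes at
    # -5 m/s^2, so both one-second intervals are flagged.
    trace = [(0, 15.0), (1, 10.0), (2, 5.0)]
    print(harsh_braking_events(trace))  # [1, 2]

Production systems would smooth the signal and fuse GPS with accelerometer data before thresholding, but the daily driving reports Uber describes are presumably aggregations of events much like these.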
The update coincides with the approach of the Fourth of July in the US, a holiday consistently marred by driving fatalities. Uber says its driver app improvements can help reduce driving risks.
"Today too many people are hurt or killed on the roads," wrote Uber chief security officer Joe Sullivan and MADD national president Colleen Sheehey-Church in a blog post on Wednesday. "While alcohol is the leading cause of traffic crashes, there are other behaviors that can put people at risk -- for instance if drivers are on drugs, haven't gotten enough sleep or are distracted. "
Data can help Uber drivers operate more safely. But it also helps Uber defend itself against competitors that would see the company hobbled by regulation, and against critics who claim the company's business practices are unsafe.
To counter its detractors, the company published data showing a correlation between declining DUI arrests and Uber usage. The Atlanta police department, the company said, reports that arrests for drunk driving fell from 2,243 to 1,535 between January 1, 2010 and January 1, 2016 -- a 32% decline. During that period, Uber pickups surged, suggesting a possible correlation.
Uber is careful to avoid claiming credit for the DUI arrest decline, because correlation is not causation. But Sullivan and Sheehey-Church said in their blog post that Uber riders see a link between the service and reduced drunk driving. Certainly there's some interaction there.
With the addition of telematic data about driver behavior, Uber should be able to make an even stronger case about safety of its service, particularly compared to other transportation options that may not have drivers under comparable surveillance. Access to a broader set of data about how its drivers actually drive will allow the company to identify risky drivers and to correlate rider complaints with real measurements of vehicle braking, acceleration, and speed.
Uber began tracking driver behavior in Houston last November, according to  The Wall Street Journal. The company says it plans to introduce the new telematics features in its driver app in 11 cities over the next few weeks.
It is, however, not the first business to collect information about its drivers. Fleet management companies have been collecting telematic data for years. More recently, insurance companies like Allstate have begun offering a rate discount for drivers who accept telematic monitoring.
In the years ahead, such technological oversight is likely to become difficult to avoid, because theoretical privacy risks will have trouble competing with the prospect of saved lives.
[Read Google, Uber, Ford Form Self-Driving Car Coalition.]
Studies indicate that telematics lead to better driving. The SAMOVAR research program conducted in Europe, for example, found that simply recording vehicle data led to a 28.1% decrease in the accident rate over a 12-month period, a result attributed to driver awareness that behavior can be checked.
There's a potential downside for Uber, however. By amplifying its capacity for driver oversight, Uber runs the risk of making its contract drivers look like employees to government regulators. Uber recently settled a challenge to its classification of drivers as independent contractors, thereby avoiding a judicial ruling on the issue.
But that does not preclude future litigation, and being able to exercise control over how work is done -- how drivers drive -- is one of the factors  the IRS considers when evaluating whether a worker is an employee or an independent contractor.

 

 36 

Industrialised cyber crime disrupting business, report reveals

Many businesses are ill-equipped to deal with the threats posed by profit-oriented and highly organised cyber criminal enterprises, a report has revealed.
Only a fifth of IT decision makers in large multinational corporations are confident that their organisation is fully prepared to deal with cyber crime, according to the report by BT and KPMG.
The Taking the Offensive – Working together to disrupt digital crime report is based on interviews with directors of IT, resilience and business operations at large firms in the UK, US, Singapore, India and Australia.
The vast majority of companies feel constrained by regulation, available resources and a dependence on third parties when responding to cyber attacks, the study found.
Although awareness of the threat has never been higher, a majority of businesses do not comprehend the methods and motivations of the attackers or fully understand the scale of the threat, the report said.
While 94% of IT decision makers are aware that criminal entrepreneurs are blackmailing and bribing employees to gain access to organisations, 47% admit that they do not have a strategy in place to prevent it.
The report revealed that 97% of respondents have experienced a cyber attack, with half of them reporting an increase in the past two years. Some 89% expressed concern about an assault by organised crime groups, with similar percentages seeing terrorist action and state-sponsored hackers as a real danger.
At the same time, 91% of respondents believe they face obstacles in defending against digital attack, with many citing regulatory obstacles, and 44% are concerned about the dependence on third parties for aspects of their response.

 

 37 

Defender OS Rebased on Fedora 24, Gets Cinnamon 3.0.6 & Linux Kernel 4.6.3

Rebased on the recently released Fedora 24 Linux operating system, Exton|Defender SRS is a Live DVD distribution aimed at those who want to carry out system administration and repair tasks in style: the latest version, build 160705, ships with the modern Cinnamon 3.0.6 desktop environment.
Today's release of Exton|Defender SRS is also powered by the most advanced Linux kernel available to date, version 4.6.3, which landed in the main Fedora 24 software repositories last week and improves support for new hardware devices.
"I’ve made a new version of Exton|Defender 64 bit. Now based on Fedora 24, released 160621. It uses Cinnamon 3.0.6 and kernel 4.6.3," said Arne Exton in today's announcement. "Exton|Defender aims to provide an easy way to carry out admin tasks on your computer, such as creating and editing the hard disk partitions. "
Another great thing about the Exton|Defender SRS operating system is that it comes with some of the greatest open-source utilities for system administration tasks. Among these, we can mention GParted, PartImage, Shred, Sfdisk, Rsync, GNU ddrescue, NTFS-3G, FSArchiver, TestDisk, Emacs, Safecopy, and Midnight Commander.
Additionally, users will find popular desktop applications like the Google Chrome (good for watching Netflix movies) and Mozilla Firefox web browsers, Mozilla Thunderbird email and news client, LibreOffice office suite, GIMP image editor, and tools like NetworkManager, Samba, Java (JDK 7u9), and Java Runtime Environment (JRE). Study the entire list of packages.
Arne Exton has also added the kernel headers and many compilation tools that allow users to install other software projects from sources. Download Exton|Defender SRS Build 160705 right now via our website, but please try to keep in mind that it only works on 64-bit computers. For 32-bit PCs, there's a version based on Mageia 5.

 

 38 

Samsung Expecting Best Quarter in Two Years Thanks to Galaxy S7 Sales

The company is believed to have experienced its best quarter in two years, as analysts estimate that Samsung's mobile division contributed to a 13% increase in operating profit compared to the same quarter of 2015.
Reuters conducted a survey among 16 analysts and reports that Samsung is expected to see an operating profit of $6.8 billion in April-June, the highest in two years, since Q1 2014.
The company relies strongly on sales of its Galaxy S7 flagship, which took the market by storm and remains one of the best smartphones released this year.
The report also indicates that Galaxy S7 and Galaxy S7 edge made up $3.7 billion or 54% of Samsung's operating profit in Q2 2016. The company shipped 16 million units of the two phones in April-June while the dual-edge variant managed to clearly outsell the flat-screen Galaxy S7.
The report also reveals that, before the start of this year, Samsung's smartphone division saw strong competition from rivals in both the higher end of the market and the budget segment. The South Korean giant's handsets competed against Apple's iPhones and mid-range devices that Huawei released to the market.
An analyst told Reuters that operating profit margins for the mobile phone business are expected to drop in the coming quarters as the Galaxy S7's popularity fades, though operating profit should still grow on an annual basis.
Samsung is expected to unveil the Galaxy Note 7 in August, and some rumors suggest that it might debut a Galaxy S7 edge+ variant too. If so, the company's profits should keep trending upward, though they will still be affected by Apple's release of the iPhone 7 in the coming months.

 

 39 

Debian 8 Gets New Kernel Update, Five Vulnerabilities and a Regression Patched

Debian Security Advisory DSA-3616-1 was published on July 4, 2016, and it looks like, this time, the kernel update patches the long-term supported Linux 3.16 kernel packages of the current stable Debian GNU/Linux release, codenamed Jessie, to fix a total of 5 vulnerabilities that have been discovered upstream.
Additionally, the Debian kernel developers have patched a regression that was introduced during last week's major kernel update in the ebtables facility. Therefore, all users of the Debian GNU/Linux 8 "Jessie" operating system are urged to update from kernel 3.16.7-ckt25-2+deb8u2 to 3.16.7-ckt25-2+deb8u3 as soon as possible.
"Several vulnerabilities have been discovered in the Linux kernel that may lead to a privilege escalation, denial of service or information leaks," reads the security notice. "We recommend that you upgrade your linux packages. Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/. "
As noted above by Debian Project's Salvatore Bonaccorso, those who use the Debian GNU/Linux 8 "Jessie" operating system on their personal computers or servers are urged to update their Linux kernel packages immediately for the vulnerabilities mentioned in the security notice to be fixed, by using either the APT (Advanced Package Tool) package manager or a GUI utility.
The new kernel version, 3.16.7-ckt25-2+deb8u3, is now live in the main Debian GNU/Linux 8 "Jessie" software repositories. As with any kernel update, please don't forget to reboot your computer after applying it, and always remember to keep your Debian GNU/Linux installation up to date with the latest security patches by checking for updates regularly.

 

 40 

Monitor your CPU temperature with Core Temp

Core Temp is a powerful CPU temperature monitor which has been helping users watch their hardware since 2006.
The project seemed to have faded away in the past few years, but a series of updates in recent months has seen it roar back to life.
Launching the program displays useful information about your CPU, including model, platform, frequency, voltage, and current temperature and utilization of each core.
Core Temp notes and displays the minimum and maximum temperatures for each core. Leave the program running, use your PC as normal, and you’ll get a feel for what effect your activities are having on your hardware.
If you need more detail, it's possible to log temperatures to a CSV file.
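Core Temp's logging is built in, but for readers who want to roll their own on Linux (where Core Temp isn't available), the third-party psutil library exposes the same kind of sensor data. A minimal sketch - note that psutil's sensors_temperatures() only works on Linux and FreeBSD, and the sensors reported vary by hardware:

    import csv
    import time

    import psutil  # pip install psutil

    def log_temps(path, interval_s=5, samples=60):
        # Append one row per sensor reading: timestamp, chip, label, deg C.
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for _ in range(samples):
                for chip, entries in psutil.sensors_temperatures().items():
                    for entry in entries:
                        writer.writerow([time.time(), chip, entry.label, entry.current])
                f.flush()
                time.sleep(interval_s)

    log_temps("cpu_temps.csv")  # five minutes of readings at 5s intervals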
Core Temp also provides a versatile "Overheat Protection" system which can display a warning, run a program, or even sleep or shut down your PC if the CPU is too hot.
Various free plugins add features like the ability to monitor your system’s temperature from other computers, and there are plenty of options and settings to help you get everything working to suit your needs.
This year's updates mean Core Temp works with every current Intel and AMD processor, and a pile of fixes should ensure it runs smoothly.
Core Temp is a free application for Windows XP and later.

 

 41 

Identity fraud in UK targets under 30s

New figures reveal a 52 percent rise in young identity fraud victims in the UK. In 2015, just under 24,000 people aged 30 and under were victims of identity fraud. This is up from 15,766 in 2014, and more than double the 11,000 victims in this age bracket in 2010.
The figures from fraud prevention service Cifas -- which is calling for better education about fraud and financial crime -- are released alongside a new short video designed to raise awareness of ID fraud among younger age groups.
"Fraudsters are opportunists", says Simon Dukes, Cifas chief executive. "As banks and lenders have become more adept at detecting false identities, fraudsters have focused on stealing and using genuine people’s details instead. Society, government and industry all have a role in preventing fraud, however our concern is that the lack of awareness about identity fraud is making it even easier for fraudsters to obtain the information they need".
The survey also finds that many young people are unaware they are at risk. Only 34 percent of 18-24 year olds say they learned about online security at school, and 50 percent believe they would never fall for an online scam (compared to the national average across all age groups of 37 percent).
Just 57 percent of 18-24 year olds report thinking about how secure their personal details are online (compared to 73 percent for the population as a whole). They are also less likely to install anti-virus software on their smartphone than the national average (27 percent compared to 37 percent).
Commander Chris Greany of the City of London Police, who is national coordinator for economic crime, says, "We have known for some time that identity fraud has become the engine that drives much of today’s criminality and so it is vitally important that people keep their personal information safe and secure. In the fight against fraud, education is key and it's great that Cifas and its members are taking identity fraud seriously and working together to raise awareness of how the issue is now increasingly affecting young people through the launch of this film".
Cifas has created an online quiz for people to check their fraud risk profile, and you can watch the video below.
Image Credit: Minerva Studio / Shutterstock

 

 42 

Samsung Galaxy J2 (2016) with Smart Glow Notification Ring Leaks in Image

Samsung is expected to feature Smart Glow on multiple upcoming handsets, but it seems that the company has picked the Galaxy J2 (2016) to be the first to get such an LED notification system. Now images of the phone’s front and back have surfaced at SamMobile.
The pictures show a gold color variant of the Galaxy J2 (2016) with the Smart Glow ring around the rear camera lens. The ring is set to light up in different colors depending on functions that were previously assigned. Smart Glow can use different colors for contacts, incoming messages or emails.
In addition, the ring can alert you if the phone's battery is getting low, but that requires some initial setup by the user. Smart Glow can also indicate weather information, lighting up in different colors depending on the forecast.
Moreover, it will have a function that reads the user's heart rate. One neat feature that Samsung has included on the Smart Glow is the ability to take selfies with the rear camera. Smart Glow takes only a few seconds to detect the user's face and light up in a certain color when the camera is focused and ready to capture the image.
Smart Glow is certainly an exciting feature, and it will debut on the Galaxy J2 (2016), said to feature a 4.7-inch display and run a 1.5GHz Spreadtrum SC8830 quad-core processor. The smartphone is also expected to come with 2GB of RAM, with internal storage set at 16GB. There's also an 8MP rear camera, coupled with a 5MP one on the front. The handset is set to launch next week in India.

 

 43 

Nokia ends the smartphone beta test

The month was April 2012. Almost two years earlier, Microsoft had held a mock funeral for the iPhone with high hopes for Windows Phone 7 Series. Now, Nokia declared that the smartphone beta test was over, launching the Lumia 900, the company's second Windows Phone.
If you think that the timing was a bit off to make such a bold statement, you’re not wrong. Windows Phone 8 – and the new flagship Lumia 920 – was just five months away, but that didn’t stop Nokia from launching this Windows Phone 7.5 beauty.
And a beauty it was. The Lumia 900 was classic Nokia design. Offered in black, white, cyan, and magenta, the buttons and camera strip were a polished chrome. No one could say that Nokia’s polycarbonate devices looked cheap.
Of course, there was a physical camera button as well, something that any longtime Lumia-lover will appreciate.
The chipset was a Snapdragon S2, a single-core 1.4GHz Scorpion alongside an Adreno 205 GPU. That’s nothing by today’s standards, but power up a Lumia 900 today and it’s still blazingly fast. In fact, it’s probably faster than many of today’s flagships.
Of course, it’s much easier to streamline an OS when you leave out basic features, such as the ability to take screenshots or resize tiles. We Windows phone users prefer to call this lack of features "light".
Visually, Windows Phone 7.5 was similar to its successor, Windows Phone 8, with the exception of a black bar on the right side of the screen that went mostly unused. The Settings menu was entirely text-based, the Store had a metro design, and tiles could be pinned and unpinned from the Start screen.
Windows Phone 7.8 - the next and final update for WP7 devices - made the design even more similar to Windows Phone 8 by allowing for resizeable tiles and getting rid of that artsy useless black bar. Of course, while it now looked like Windows Phone 8, it couldn't run WP8 apps. You can think of it in terms of Windows RT 8.1 Update 3 , except for phones.
The biggest difference between Windows Phone 7 and 8 was the kernel. The former was based on Windows CE - the same as all of Microsoft's mobile efforts since the '90s - and the latter was based on the NT kernel. Because of this, no Windows Phone 7 devices received an upgrade to Windows Phone 8.
We can also attribute the lack of upgrades to system requirements. All Windows Phone 7 devices used single-core processors. Windows Phone 8 (until Update 3) exclusively supported three variants of the dual-core Snapdragon S4 Plus chipset.
The camera on the Lumia 900 was to be one of the best that the Windows Phone platform had to offer. Of course, these were the dark ages of April 2012, so Pureview wasn't a thing on Windows Phone yet (again, that would come five months later with the Lumia 920).
The rear camera was an 8-megapixel unit behind Carl Zeiss optics. There was no optical image stabilization (OIS); if there was, it would have been Pureview. The front camera was a whopping 1.3MP.
Keep in mind that this was a time before Lumia Camera, or even Nokia Camera. At the time, all you had was the stock Camera app and anything you could find in the Marketplace. I'd show you a screenshot, but of course, Windows Phone 7.5 didn't have that functionality.
Here are some samples:
Sure, by today's standards, the Lumia 900's camera wasn't great. Low-light performance is nothing special, the metering could be better, and some of the photos might seem a bit warm.
The device does have something in common with Lumias that have come since then. It worked with what it had. It's like how you managed to take that surprisingly beautiful photo with your Lumia 520. The hardware might not be as good, but the software does the best that it can.
Windows Phone 7.5 was a delight to use. There was so much that was unique about it, and it felt good. Those 'metro' elements are mostly gone now in Windows 10 Mobile, and to be honest, the OS has been talked about enough.
Let's talk about pre-installed apps, because if you've only been using a Windows phone for two years or so, these will be completely foreign to you. We have apps like Creative Studio, Nokia Music, Nokia Maps, and Nokia Drive. It's also missing some key Microsoft services, such as Skype.
I know what you're thinking, "Creative Studio. I have that. It's just called Lumia Creative Studio now." This is an entirely different app. The features in Creative Studio are not available in Lumia Creative Studio, and vice versa.
Rather than Color Pop and blur, we have 'face warps' and 'live styles'. You could also import a photo and adjust things like sharpness, color, and exposure - or you could add a filter.
Remember MixRadio? The beloved service is gone now, but before it left us, it was preinstalled on all Lumias; however, before MixRadio, it was called Nokia Music.
At the time, MixRadio wasn't really a thing. Part of Nokia Music was listed as 'mix radio', alongside 'my music', 'create a mix', 'offline', and 'gigs'. Of course, the app doesn't work anymore.
There was also a Music + Videos app, which would later be split into Xbox Music and Xbox Video, and later rebranded to Groove Music and Movies and TV, respectively. Music + Videos came with the friendly Zune logo that we all remember.
Of course, this was 2012. Did you think you could open the app and find all of your purchased music and movies there? You silly fool. This was a time when we had to sync to a PC to do anything like that.
These devices also required a wired connection with a PC to update the OS. This made it easy for me to keep my Lumia 900 on Windows Phone 7.5; had updates been delivered over the air, the phone would consistently bug me with a notification to upgrade to 7.8.
The Nokia Lumia 900 was a wonderful device, and to be honest, it still is. Obviously, that can mostly be attributed to the fact that it was never upgraded to a version of the OS that would use more resources, but it's certainly something that Windows phone fans can look back upon with nostalgia.
Of course, when I call it a wonderful device, I mean that it's great to use and play with for a bit. I'm certainly not recommending that you go out and buy one to make it your daily driver.
After all, the selection of apps is terrible. Next time you're complaining about the apps that are available for Windows phones, go and pick up a Windows Phone 7 device.
While the device is certainly a fun one to play with and explore the roots of the platform that we all know and love, Nokia was definitely being hyperbolic (perhaps not on purpose) when it declared that the smartphone beta test was over.
The Lumia 900 came just five months before the Lumia 920. Instead of a 4.3" 480p AMOLED, an 8MP camera with f/2.2 aperture, 16GB storage, and 512MB RAM, we would soon see a 4.5" WXGA (768x1280) IPS LCD, an 8MP camera with f/2 aperture and optical image stabilization (Pureview), 32GB of storage, and 1GB RAM.
When considering that perspective and the fact that Nokia clearly had the Lumia 920 deep into the pipeline when the Lumia 900 was announced, the Lumia 900 seems like a throwaway device, rather than the end of some arbitrary beta test.

 

 44 

Video conferencing increases productivity

Video collaboration increases productivity and improves both business and personal relationships, according to video conferencing technology company Lifesize.
The company polled its users and says that 99.2 percent of respondents find video conferencing boosts relationships both in and out of the office. No word on how many people were polled, though.
Also, 91.7 percent of respondents say it is easier to get their point across when they can see the other person on video, and 77.7 percent say their productivity jumped, and their work-life balance improved.
"Frost & Sullivan research findings are in line with the Lifesize survey results. The inherent benefits of video conferencing, continuous enhancements to the user experience and a multitude of other factors will continue to drive growth for visual collaboration solutions", explains Rob Arnold, principal analyst at Frost & Sullivan.
The report points out what we know all too well -- that non-verbal communication is crucial to getting a message across properly. Eliminate these details, and you risk your message being completely misread.
"The latest Frost & Sullivan research finds that the cloud-based video conferencing services market is expecting a compound annual growth rate of 20.7 percent. Companies now understand that it helps employees be more productive and focused in the business environment, says Craig Malloy, Lifesize CEO. "A majority of survey respondents report that video conferencing reduces the time needed to complete a project and that it decreases the likelihood of multitasking. Lifesize’s cloud-based video conferencing and all-in-one collaboration tool helps its more than 3,000 customers to thrive and be successful -- both in and out of the meeting room".
Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved.
Photo Credit: Rawpixel.com / Shutterstock

 

 45 

The Xiaomi Mi Band 2 is the most disappointing wearable of the year so far

The Xiaomi Mi Band had a few things which really set it apart, not least the bang for buck of the previous models' $15 price tag. The Mi Band has since evolved rapidly, first with the addition of a heart monitor and now with the addition of a display and soft button. In this review I cover the Mi Band 2 and all of its goodness - as well as its flaws.
The Mi Band has had a fairly dramatic facelift. While it's still roughly the same physical size, it now sports a small OLED display which is customizable, and the aluminum on the Mi Band has been replaced with "scratch-resistant" glass. Located under the display is a small circular touch-sensitive soft-button which is used for navigating through the Mi Band 2's options.
On the underside of the Mi Band 2, much like the Mi Band 1S, you find a heart rate monitor. The core of the Mi Band 2 slips into a black Dow Corning TPSiV band, with other colors being made available for purchase separately.
The 'buttons' on the face of the band, as well as on the strap, both have the same design as the back of the Xiaomi Piston 3.0 headphones, suggesting that Xiaomi is looking to unify their product line design.
The feature set hasn't changed too much, apart from the display which has added some things and simplified others. You can see my coverage of the features for both the original Mi Band and Mi Band 1S if you'd like additional information.
The display defaults to time, but a user can also enable the wearable to display steps, distance, calories, heart rate and remaining battery. It uses the display for things like notifications as well: receiving a WhatsApp message will show a WhatsApp icon on the display, and similarly for Facebook and a number of other apps. Just to make things clear, though, I've only ever had the aforementioned happen to me once. It almost always shows the SMS icon for WhatsApp, and the generic 'app' icon for everything else.
The display is also used to indicate that a firmware update is taking place and, in combination with the soft-button, can be used to trigger heart rate monitoring without the use of a phone.
Moving your hand up to look at it automatically turns the display on to tell you the time. This was significantly flawed, though, and I'll cover it a bit further down in more detail.
The ability to monitor your heart rate was introduced in the second-generation Mi Band 1S, but it appears to have been slightly improved in this generation. Successive firmware updates left my Mi Band 1S swinging wildly in its heart rate readings, but I'm not getting this on the Mi Band 2. Hopefully future firmware updates don't negatively impact the current performance.
Although heart rate measurements are accurate for the time being, a new issue has emerged with the Mi Band 2: the sensor refuses to work about three times out of five. It attempts to take a measurement, but after about 20 seconds it displays "x--" instead of a heart rate. It's definitely not down to the way the band is worn, as I've tried several different positions; I put the flaw down to what is almost certainly just shoddy firmware.
The sleep monitoring feature is pretty great in that it knows a little too well what time you've fallen asleep and what time you've woken up, and it also tries to throw some analysis in there about the wearer's sleep cycles during the night.
Personally I believe the latter is bunk, mostly because determining 'sleep cycles' through a three-axis accelerometer is still practically impossible, but just knowing what time you've fallen asleep or how many hours you've slept is worth its weight in gold when trying to figure out what works for you.
At one point the Mi Band 2 stopped recording anything for about a week and failed to sync several times. I haven't experienced this with previous generations of the Mi Band, and the only way I can usually make it sync is to restart my phone.
Although the Mi Band 2 displays steps on the display, the only way to see your sleep data is to sync it to your phone through the companion app.
The Mi Band 2 keeps track of every step you take at all hours of the day, and also tries to figure out if what you're doing is intentional cardio-heavy exercise or if you're simply just walking. It doesn't always get this right, but it does seem to be able to pick up on things I wouldn't expect it to. For example, I was late for a train, so I started walking a bit quicker, and the Mi Band 2 registered that as "activity" rather than continuing to see it as a walk.
It also estimates the distance you've walked or ran and the calories you've burned while doing so. All of this is also viewable by just touching the soft-button on the display, so you don't need to check your phone to see how far you are from your daily goal.
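Xiaomi's firmware is proprietary, so exactly how the band separates a walk from "activity" is unknown, but pedometers of this kind classically count peaks in the magnitude of the three-axis accelerometer signal. A simplified textbook sketch in Python - the thresholds are illustrative, not Xiaomi's:

    import math

    def count_steps(accel_xyz, sample_rate=25, threshold=11.0, min_gap_s=0.3):
        # accel_xyz: (x, y, z) readings in m/s^2, sampled at sample_rate Hz.
        # A step is counted when the signal magnitude rises above the
        # threshold (gravity alone is ~9.8 m/s^2; walking peaks sit higher),
        # with a refractory gap so one stride isn't counted twice.
        min_gap = int(min_gap_s * sample_rate)
        steps, last_step = 0, -min_gap
        for i, (x, y, z) in enumerate(accel_xyz):
            magnitude = math.sqrt(x * x + y * y + z * z)
            if magnitude > threshold and i - last_step >= min_gap:
                steps += 1
                last_step = i
        return steps

A real tracker layers filtering and cadence analysis on top of this, which is presumably where the Mi Band 2's walk-versus-activity distinction comes from.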
Battery life is the real kicker for me. I'm not sure what's going on with my Mi Band 2, but I burn through about 20% of the battery a day. That is significantly more battery use than both the original Mi Band and the Mi Band 1S, and it results in significantly less battery life than Xiaomi's claimed 20 days.
I mentioned the display having issues earlier, and it was in regard to it coming on all the time - literally every five to ten seconds, all day, every day, whenever my arms are not dead at my sides. This means that whenever I'm at a desk, whenever I'm driving, or whenever I have my hands on my lap, the display is flashing on and off.
I spoke with a representative from Xiaomi about this specifically, and his response suggests I may have a defective unit. This is definitely possible, but Xiaomi isn't getting away with it that easily based on the rest of the issues I've been having.
Fortunately there was a fix: the Xiaomi rep told me the feature can be disabled. When I did so, battery usage dropped significantly.
The Mi Band has come a long way since I first reviewed the original a while back. It has grown in features, design, and even price. The overall build quality of the device - as in whether or not it could take a beating - is still solid. I'm a big fan of the TPSiV band, and I've never had a Mi Band die on me.
The issues with the Mi Band 2 range from things as simple as syncing to very important things like battery life, and features Xiaomi often touts, like heart monitoring, are almost useless if they only work once every couple of days.
Most of my experiences with all of the Mi Band generations have been great, but the Mi Band 2 - at the time of writing this - is nowhere near ready for purchase. I expect multiple firmware updates, multiple app updates, and at least one model revision before it becomes what it set out to be.

 

 46 

The New IT: Driving Business Innovation With Tech

For years, IT professionals have been exhorted by their leaders, their colleagues, and assorted industry pundits to better connect IT to business goals. It's a core strategy many have neglected because they're locked away in data centers.
However, in the course of my business, I am starting to see an increase in the number of IT leaders using specific strategies to focus on deriving real business outcomes from the technology they use every day. The approaches they're trying include modernizing infrastructure, exploring ecommerce, and looking for opportunities with connected devices, mobile, wearables, and the sharing economy.
Traditionally, IT executives have focused on buying various components, such as servers, storage, and software from different vendors, assembling the pieces like a puzzle into their own systems, and hiring specialized staff to maintain the systems. With this model, the IT organization ends up spending more money than it should, and dedicating too much time, on an endless cycle of integration, configuration, tuning, and testing.
[Worried about keeping secure in the cloud? Read 7 Ways Cloud Computing Propels IT Security.]
As digital work environments become commonplace, forward-looking IT leaders are not content to sit back while a chief digital officer gets to own the company's modernization budget. Gartner forecasts worldwide IT spending will total $3.49 trillion at the end of 2016, a decline of 0.5% from 2015 spending of $3.5 trillion.
Instead, IT and non-IT leaders alike are choosing to spend on nontraditional digital and business technology solutions. Business technology buyers are actively finding ways to free up capital, invest in new technologies, and deploy new capabilities for new business opportunities. They're shifting investment toward modern, agile capabilities, such as cloud computing, sharing services, and bring your own device.
I'm seeing many traditional industries -- such as banking, insurance, and government -- adopting what I call "new IT" approaches to reduce capital expenditure, modernize systems, and free up budget for new business-relevant initiatives.
From where I sit, the "new IT" transition has not been easy. Poor visibility is perhaps the biggest challenge I see holding IT teams back from digital innovation. I'm talking about poor visibility into business goals, application delivery, technology operations, delivery costs, and how the customer is affected.
Visibility is further limited by new force-multipliers, such as the proliferation of user-driven applications, an increase in the number of connected systems, new automation tools, and adoption of serverless techniques like "X"-as-a-service and APIs.
Instead of trying in vain, like King Canute, to turn back this digital tide, "new IT" leaders are starting to accept this complexity and focus on improving visibility into their many disparate systems.
For example, one of our customers in online retail has deployed a new reporting capability to establish a direct line-of-sight into all the stages of digital service delivery -- from planning to development, through quality assurance and staging, and into ongoing operations. This view enables the company to efficiently allocate resources, stay on top of the unexpected, and spend less time on troubleshooting. Even better, the company is able to drive digital innovation in product development, market engagement, customer loyalty, and business value.
DevOps is another approach I am seeing "new IT" leaders use to enable business innovation. The fifth annual RightScale State of the Cloud Survey polled more than 1,000 IT professionals.
According to the survey, the share of respondents who said their enterprises had adopted DevOps increased from 66% in 2015 to 74% in 2016. More than 80% of respondents said they are now using DevOps principles for application delivery.
I work with one large SaaS business that commits new feature code daily, and provides product teams with feedback on exactly how customers are using its service. Working closely with both Dev and Ops teams, business leaders can try out new capabilities, iterate quickly, and measure real business results.
They can then rapidly double down on successful innovation, while quickly pivoting when things don't go quite as planned. With the right systems and technologies in place to deliver insight, DevOps connects application delivery with business goals and customer experiences, and helps business leaders work directly with IT on iterative, innovative approaches.
Every company is becoming an analytics company as new types of data pour in from new digital devices, systems, and applications. This data has incredibly valuable information on customers, product, partners, and operations, but even the most analytically oriented company is challenged by the amount and diversity of data received.
"New IT" leaders meeting this challenge most successfully appear to be those who connect these many data sources together to establish a common data fabric. This is accessible and meaningful, not only for IT to solve development and performance problems, but also for business leaders to gain actionable business insights.
For example, one gaming business I am working with has started tracking and analyzing website metrics every day, not only measuring application speeds and feeds, but also uncovering customer activity such as wagers made, new users signing up, money paid out, and cancelled subscriptions.
Connecting IT delivery directly with business goals is enabling the company to make data-driven technology decisions, creating measurably better business outcomes. To stay competitive, organizations need to drive innovation, not only with their products and services, but also in business approaches and finding new strategies to exceed business goals.
By modernizing infrastructure, ensuring visibility, exploring new technologies such as cloud computing, and adopting techniques such as streamlined DevOps and common data fabrics, IT can sit at the center of business development and take an organization to new heights. Aligning IT with business goals from the get-go gives companies a competitive edge and sets the standard for success.

 

 47 

Research: Exploring the connection between DevOps and digital

The road to digital transformation is a well-trodden one, and it’s pretty unusual these days to come across an organisation that is not marching down it. From government departments to golf clubs, from...

 

 48 

Agile Vs. DevOps: 10 Ways They're Different

Agile discipline is in the process of taking over much of the enterprise world. It's not only because executives like saying their organization is agile. It's because agile discipline in its various incarnations can work very well for companies looking to be responsive to customers and nimble in the face of changing business conditions.
Agile methods can be used as part of DevOps -- a portmanteau of "development" and "operations" -- which is also becoming more and more popular in the enterprise world. The two words, agile and DevOps, are so popular, and used in so many different ways, that some executives and pundits seem to consider them interchangeable. While convenient, such use can lead to real problems.
Why? Because agile and DevOps are not the same thing. Treating them as the same thing can cause departments to abandon good and safe practices in the pursuit of something undesirable. So, let's take a look at what these two trendy disciplines are, how they work together, and why they're not the same thing at all.
[What would you do if you had someone shadowing you all day? Read Adventures In Pair Programming.]
Now, because each of the terms we're exploring is rather broad, there's plenty of room for discussion about their meanings and uses. I'm quite OK with that. Once you've reviewed the differences highlighted here, I'd love to hear your ideas about what I've gotten wrong. I'd also like to hear how you've experienced agile or DevOps, and what you think about the ways in which they relate.
I'll look forward to the conversation in the comments section below. In the meantime, let's start the discussion with a couple of key definitions.

 

 49 

10 Hot Smartphones To Consider Now

Choosing the best smartphone can be a difficult proposition. You may already have a particular platform preference, which limits your options. Or perhaps you favor a particular service provider based on reception in your area.
Whatever you choose, there will be something more tempting soon enough. The smartphone product cycle ensures that.
At the moment, Apple and Google are worth watching, both for what they have planned and for their responses to what other hardware makers like Samsung and Lenovo have introduced already this year.
Apple's iPhone 7, due this fall, has people worried because reports suggest it won't have an analog headphone audio port. Instead, it is expected to include a Lightning port that can handle charging, data, and digital audio.
The reason this is worrisome is that digital audio can be subject to technological controls like DRM , unlike analog audio. It also likely means that third-party vendors will have to apply to Apple's MFi program, which involves rules and fees, to create hardware like headphones that work with the phone's proprietary Lightning port.
There may be benefits to iPhone 7 customers in the form of reduced phone size or more room for other components like the battery. But the cost appears to be reduced freedom to create peripherals that connect to the analog audio port, reduced peripheral choice, and increased peripheral cost to offset MFi licensing fees. Apple reportedly reduced those fees in 2014, but they still figure into hardware makers' margins and prices.
The iPhone 7 also concerns investors because in April Apple  reported  a decline in revenue and iPhone sales, after years of uninterrupted growth. IDC's explanation for this was that the changes from the iPhone 6 to the 6S were insufficient to drive upgrades. At least that's part of the story.
In any event, there's pressure on Apple to add new features that really make the iPhone 7 desirable and unique. Unfortunately for Apple, one of the most obvious possible features, water resistance, is already available in Samsung's Galaxy S7 and S7 Edge. Apple, which sued Samsung years ago for copying the iPhone, now appears to be copying Samsung. While there's lots of idea borrowing among tech companies, that's not the sort of market perception Apple wants.
[See Mobile App Development: 8 Best Practices.]
Google meanwhile has promised to deliver developer versions of its Project Ara modular phone this fall, with general availability planned for next year. Project Ara has been scaled back a bit -- the CPU, display, and RAM won't be removable -- but it still has potential to change the dynamics of the smartphone market.
Other handset makers like LG are already experimenting with limited modularity. If Project Ara succeeds, smartphones may become a bit more open and more conducive to third-party participation from peripheral makers.
But Google has to demonstrate that Project Ara phones won't just be bigger and more expensive than smartphone designs that don't contemplate expansion or modification.
While we wait, here are nine great smartphones you can pick up today, and one to look forward to in a few months. Take a look and let us know what you think in the comments below. Would you consider these models? Did we miss your favorite smartphone?

 

 50 

Chromecast functionality arrives in Chrome 51

Up until now, the Google Cast extension was needed in Chrome to send content between the device and a Chromecast. But now with Chrome 51, Google has integrated Cast functionality directly into the browser itself. Google says that if you have the Google Cast toolbar icon extension, there is no need to remove it because it'll still offer quick access to the Google Cast functionality.
Casting between the browser and a Chromecast-connected device works by allowing users to stream any page to the Chromecast-connected device (i.e. an HDMI TV). To access Cast without the extension installed, users need to head over to the Chrome Settings menu and go to the Cast option, which is found between Print and Find midway down the menu. Alternatively, users can right-click the tab that they want to cast and then select Cast from the pop-up menu.
As well as integrating the Casting feature, Google has simplified the process. In earlier versions of the Cast toolbar icon extension, you had the option to set the resolution, bitrate, quality, and other features when mirroring the contents of a tab to the Cast device. Going forward, these options have been removed; instead, the system automatically adjusts the settings based on the quality of the network.
The last major change to Google Cast in Chrome 51 is the ability to cast tabs into Google Hangouts.
Google says it is rolling out the functionality. The rollout appears to be staggered as not everyone can see the Cast option in the menu just yet.
Source: Google

 

 51 

Intel code leaves systems vulnerable to attacks; flaw used to bypass all Windows security

Security researcher Dmytro Oleksiuk has uncovered a flaw in Lenovo machines which affects the BIOS, leaving systems vulnerable to attack. Lenovo's Product Security Incident Response Team (PSIRT) is now aware of the UEFI vulnerability, which it says was reported as part of an uncoordinated disclosure by Oleksiuk.
Lenovo PSIRT claims that it made several attempts to contact Oleksiuk after he stated over social media that he would disclose the UEFI-level vulnerability in Lenovo's products. Following this, Lenovo conducted its own investigation:
Following the Lenovo announcement, Oleksiuk took to Github to say that the vulnerability was actually fixed by Intel in the middle of 2014, although Intel never issued a public advisory. Even though it was fixed, UEFI firmware is sometimes slow to be updated, so the vulnerable code could linger on many devices for a while.
What's worse is that this was quickly confirmed to not be limited to Lenovo ThinkPads as originally thought. The flaw was discovered in code used by Gigabyte motherboards, HP systems and more.
The exploit can disable the write protection of firmware, meaning that Windows security features, such as Secure Boot, can be disabled. Embarrassingly for Lenovo, its security advisory rates the severity of the bug as “high” and the scope of impact as “industry-wide”. Lenovo is working with Intel and other IBVs to fix the issue as quickly as it can. The only good news is that an attacker would need physical access to a device before deploying ThinkPwn.
Source: Dmytro Oleksiuk, Cr4sh blog

 

 52 

Twitter estimates that it has 10 million users in China

Twitter has been blocked in China since around 2009. That seriously hampered any hopes that the U.S. firm — which is struggling to grow its userbase worldwide — had in China, but the service does still have a hardcore base of loyalists who use it in the country.
Previous estimates have pegged Twitter’s China-based following as being as high as 35.5 million users, but the actual number appears to be much lower. According to a source inside Twitter, who spoke to TechCrunch on the condition of anonymity, its service has around 10 million active users in China.
That number itself is an estimate, one that is used internally, because even Twitter isn’t fully sure. That’s because it is hugely challenging to tally up China-based users by virtue of them connecting to the service via VPN software, which allows them to use an internet connection outside of China, thus bypassing the country’s web censorship system. So, a China-based user accessing Twitter on a VPN will show up as being located in the U.S., UK, Singapore or wherever else their VPN is set to.
Either way, the figure is small: 10 million is a mere drop in Twitter’s 310 million total active users — it counts 65 million of those in the U.S., its largest single market, with the remaining 245 million overseas. It is also a tiny fraction of the 688 million internet users in the country, according to government data. That figure, taken from December 2015, means that, for the first time, over half of China’s population is now online.
Twitter has a growing business in China, anchored by an executive who controversially spent time in her early career working on military security for the government, but its userbase in China isn’t part of that. Local services like chat app WeChat, which contains social features and has multiple hundreds of millions of active users in China, and microblogging service Weibo are the dominant social media. Instead, Twitter makes money in China by offering Chinese companies and media a gateway to global audiences by advertising on its service. That’s exactly the strategy that Facebook employs, with both services proving to be popular advertising and distribution channels for China’s state-run media, which are among the most lucrative clients.
If it isn’t critical to its business, why is Twitter’s internal estimate for its China userbase worthy of note?
Well, it helps sketch out a figure for the number of people who use VPNs in China, an oft-influential section that includes journalists, activists, prominent business leaders and decision makers. It is hard to fully quantify this block of internet users, though, even with Twitter’s estimate. Not everyone using a VPN will use Twitter, but it is likely that a large portion of VPN and Twitter users overlap. Certainly you can’t use Twitter in China without a VPN.
In that respect, Twitter’s China estimate — while not mission-critical for Twitter’s business — gives a glimpse at the number of ‘global web consumers’ who leap over China’s Great Firewall of censorship to read and consume whatever they want on the internet.
“If there are 10 million Chinese on Twitter that is great news. Twitter is one of the last great bastions of relatively free speech. When was the last time 10 million Chinese could freely say what they feel, on any platform, without fear of reprisal?” Charlie Smith, one of the pseudonymous founders of anti-censorship group Great Fire, told TechCrunch via email.
Great Fire released a ranking index today to give greater clarity around China-specific VPNs, and Smith said he believes that “there are growing pockets of ordinary people who want to circumvent censorship so they can access whatever information they choose.”
“It’s fantastic that Twitter can provide us with an indication of how big this group might be,” the Great Fire founder added.
It is highly unlikely that Twitter will take the necessary steps that might get it unblocked in China — that would mean caving in to censorship demands and self-policing user content, as LinkedIn has done — while there’s already fierce competition in social media, so don’t expect that China-based user number to grow much any time soon.

 

 53 

The user guide to early stage fundraising

Over the last decade, the early-stage funding environment has dramatically changed. There are now myriad financing options that founders can consider as they look to build their companies. Nearly 70,000 companies received funding through angel networks and 3,000 through venture capital firms annually, according to CB Insights.
On the most recent episode of Ventured, we spoke with Qasar Younis, Chief Operating Officer of Y Combinator (YC), about the early-stage funding landscape and how entrepreneurs can best navigate the waters of raising capital today. Here are some takeaways from our discussion.
Benefit from more accessible investors
The startup ecosystem is more sophisticated than ever before because of the global availability of startup resources and new types of funding sources. With platforms like AngelList and Indiegogo, access to early capital has dramatically improved. Investors like YC and KPCB have continued to increase funding accessibility for founders regardless of location. Programs such as KPCB Fellows or KPCB Edge target entrepreneurs earlier in their careers, while the YC Fellows Program and the YC College Tour seek to educate new entrepreneurs on how they can begin their journeys as founders.
Consider all funding options before tapping VCs
There are roughly four ways to get funding for your startup. Understanding your funding options and thinking critically about each path is crucial to your success and is often overlooked.
Bootstrapping: This is how the majority of companies are funded today. The benefit here is that you retain maximum ownership of your company. However, this may not be sustainable as your capital requirements grow.
Incubators & Accelerators: If you are a first-time entrepreneur, it can oftentimes be helpful to join an incubator or accelerator to get your business going. While a variety of these exist today, most provide mentoring, content, and a small amount of capital.
Online Platforms: There are a number of funding platforms available online. As a founder you can utilize these to get a sense of demand for your product, find angel investors from across the globe, and get feedback on your company.
Venture Capital: While some founders may jump straight to venture capitalists, most reach this step later in the life of their companies. By utilizing the options outlined above, or a combination of them, you can prove out more as a founder prior to meeting investors.
Don’t worry too much about today’s macro environment
While the current economic environment has been fluctuating amid concerns over global growth and European solidarity, early-stage founders should not panic. The macro funding environment does not necessarily constitute a barrier to achieving success. Oftentimes, downturns provide unique opportunities for entrepreneurs to succeed because it’s harder for competitors to raise capital, and talent is usually cheaper to hire. For instance, more than half of the companies on the Fortune 500 list in 2009 were started during recessions or bear markets, as were almost half of the firms on the Inc. list of America’s fastest-growing companies in 2008. In the most recent economic turmoil of 2009, both WhatsApp and Square were started.
Great companies are founded irrespective of a boom or bust. Startups are a test of will and determination and as a result are often on a seven- to 10-year time horizon, if not longer.
Stay focused on customers & users
While many entrepreneurs don’t realize it, they may be going through the motions and simply doing things that look and feel like work but aren’t actually creating value that will ensure long-term success. Two areas that highlight this gap are customers and product fit, or making stuff that people really want. Not enough entrepreneurs truly understand their customers, especially in the early days, even though that understanding will help dictate product and roadmap decisions. Similarly, founders need to be able to explain why customers actually want the product they are creating, since that insight will help drive almost any business forward.
Know that VCs invest in people, not pitch decks
Although we evaluate certain metrics that help us gain conviction about a particular company, we often invest in the intangibles — the things that are hard to get across on paper. We find ourselves asking questions like how do the founders work with each other, how do they communicate, what do they know that no one else knows, and how are they uniquely positioned to solve this unique problem? Having conviction about the team beyond quantifiable growth or user metrics is a major driver for how we decide to invest in companies.

 

 54 

Silent Circle silently snuffs out its warrant canary — but claims it’s a “business decision”

Silent Circle, the maker of encrypted messaging apps and a security hardened Android smartphone, called Blackphone, has discontinued its warrant canary.
Attempting to reach the page where it was previously hosted results in the following notification:
Warrant canaries became popular in the wake of the 2013 Snowden disclosures revealing the extent of government surveillance programs, as a tacit route to signify to users when a service might have been compromised by a government request for user data.
Canaries act as a workaround for U.S. gag orders, which prevent companies from publicly disclosing warrants for user data. A company publishes an explicit statement that it has not received any warrants for user data to date — allowing the reverse to be signaled if the canary is removed or not updated.
TechCrunch was tipped to Silent Circle’s dead canary by a reader; however, the company claims it discontinued the canary as a “business decision” — not because it has received “any warrant”.
“We have not received a warrant for user data,” Matt Neiderman, Silent Circle’s General Counsel told TechCrunch. “As part of our focus on delivering enterprise software platform we discontinued our warrant canary some time ago. The decision was a business decision and not related to any warrant for user data which we have not received.”
The company has run into problems with its warrant canary before, including in March last year, when it omitted a statement from an update, which it subsequently added. So it has something of a checkered history here already.
At the time of that previous problem, Neiderman claimed the company had not received warrants “of any type”. But his denial in the latest instance is arguably a little less explicitly worded. We’ve asked him to confirm whether Silent Circle has received a warrant of any type to date and will update this post with any response.
It’s also worth noting the company is not headquartered in the US — it previously moved its HQ from the Caribbean to Switzerland on account of what it said were “world best” constitutional privacy protections in the European country. (However, other non-US-based encrypted comms companies, such as Germany’s Tutanota, do continue to maintain a warrant canary for transparency and good practice purposes, despite not being subject to legal gag orders in the country where they are based.)
Discussing Silent Circle’s decision to discontinue its warrant canary, UK-based security commentator Graham Cluley suggested the move does look odd.
“I would think a company like Silent Circle would have enough nous knowing that if it was to discontinue its warrant canary plenty of people would be concerned. So the sensible thing to have done — if it had been some sort of business decision, and I can’t imagine it’s really that much work maintaining a warrant canary — would have been to have been quite public and open and transparent about it,” he said. “But to silently kill it off seems odd.
“If this really was a business decision why not be open about it? Especially for a company which works in those sort of circles… You would [also] expect that discontinuing something like this could be bad for their business. Could raise concern among their customers. So it seems an odd business decision to make.”
The same tipster who pointed TechCrunch to the dead canary also claimed that a recent Silent OS update to Blackphone’s default apps requires increased security permissions, such as access to the camera, which can no longer be disabled by users.
Silent OS 3.0 was released towards the end of June and is billed as including various security fixes and features. Among them are a new Privacy Meter, integrated into the Security Center, which notifies the user when a security/privacy threat is present and indicates its severity and potential mitigating actions, and a CIDS (Cellular Intrusion Detection System) to warn of potential threats in the cellular network interface, such as weak encryption and device tracking via silent SMS. It’s based on the latest release of Google’s mobile platform, Android Marshmallow 6.0.1, and also brings various UX changes to Silent OS’ platform.
There’s no explicit mention of increased permissions in Silent Circle’s blog post about the major platform update. We’ve asked Silent Circle to confirm whether it has increased permissions for its apps in Silent OS and if so, for what purpose, and will update this post with any response.
Cluley  told TechCrunch that increased app permissions might be needed to support new features on the platform but again said the onus would be on such an apparently security-focused company to be very clear about its intentions here.
“You would hope if they’re changing their permissions they’ve got some sort of explanation as to why they would need to access your camera, for instance. Maybe it’s to scan in QR codes, maybe it’s for some sort of facial recognition biometric going forward,” he said.
“We do have to be careful about apps and the chance of new permissions creeping in stealthily if you like, and people not realizing that they are granting more permissions than when they initially installed an app. So I think some transparency’s called for.”
“In that kind of climate, wouldn’t a warrant canary be a good thing?” he added.
Adding to the uncertainty here, Silent Circle has undergone some significant employee shifts in recent months, losing two key co-founders: veteran crypto expert Jon Callas and its chief scientist Javier Agüera. We’ve also heard reports of wider staff cuts, although it is not clear whether the co-founders’ departures were voluntary or not (Callas has since taken up a role at Apple).
In addition, a lawsuit filed against Silent Circle by a business partner last month in a New York state court claims the company, which has raised $80 million to date from investors (most recently taking in $50M in February 2015 ), has failed to pay a $5M debt, according to a report on the Law360 website. The suit further claims it is considering bankruptcy after several major distribution deals fell through.
We’ve asked Silent Circle for comment on the lawsuit and will update this post with any response.

 

 55 

Apple urges organ donation via new iPhone software

SAN FRANCISCO (AP) — Apple wants to encourage millions of iPhone owners to register as organ donors through a software update that will add an easy sign-up button to the health information app that comes installed on every smartphone the company makes.
CEO Tim Cook says he hopes the new software will help ease a longstanding donor shortage. He told The Associated Press that the problem hit home when Apple co-founder Steve Jobs endured an "excruciating" wait for a liver transplant in 2009.
Apple is adding the option to enroll in a national donor registry by clicking a button within the iPhone's Health app, which can be used to manage a variety of health and fitness data. The software will come to all U.S.-based iPhones when the company updates its mobile operating system this fall.

 

 56 

Enterprise NPM users to get help with security, licensing

NPM Inc., which oversees the popular NPM registry of JavaScript modules, is enlisting outside help to provide guidance on security, code analysis, and licensing issues.
Under an expansion of NPM Enterprise to be detailed today, NPM Inc. will partner with third parties to take care of auditing of modules via its NPM Enterprise add-ons service. The current NPM Enterprise product takes the NPM open source registry code base and allows large companies to use it behind their firewall, sharing and reusing code and building private modules not shared on the public registry. Until now, users have had to conduct their own audit processes of modules.
Initial partners include Fossa, which will offer license compliance assistance; bitHound, for code quality analysis; and Lift Security for the Node Security Platform, providing a database of known vulnerabilities in code. The partnerships let experts in capabilities like security and license compliance annotate what NPM Inc. has been doing and eliminate the manual, tedious processes for companies so developers can pick the best open source modules, said Benjamin Coe, general manager for NPM Enterprise product at NPM Inc.
While NPM Enterprise is a fee-based service, some add-on services will be free of charge, such as bitHound's services, at least at first, Coe said. Others, including Fossa, would charge a monthly fee. "It's basically up to the third party," he said. "We're just opening up our platform where anyone can write something on top of it. "
More partners will be sought to cover additional capabilities. One possibility is analytics, providing information about the behavior of users of a module.
Add-on services eventually could be added to the public registry, said Coe. The NPM registry, popular for use with the Node.js server-side JavaScript platform, features 300,000 open source modules for capabilities like Web servers and front-end JavaScript frameworks. The online registry is accessed via the NPM package manager.

 

 57 

Mogees Play turns any surface into a music and gaming device

The Mogees Play is the latest product from London-based startup Mogees. Based on the same contact microphone and machine-learning technology first seen in the company’s original product, the Mogees Pro, it promises to turn any surface into a music and gaming input device, bridging the physical and digital worlds in new and delightfully creative ways.
Once again, Mogees is launching a Kickstarter campaign to bring its wares to market but, unlike many crowdfunding campaigns, which I tend to be very hesitant to cover, the startup has form in shipping product and has to date sold thousands of Mogees Pros. The Mogees Play hopes to build on that legacy with a more mass-market device that fulfils founder Bruno Zamborlin’s mission to introduce non-musicians to the technology and encourage everybody to begin making music and exploring their creativity right out of the box.
The Mogees Play will ship with three iOS apps: Mogees Pulse, a rhythm game, which is a little reminiscent of Guitar Hero (and has the backing of Guitar Hero founder Charles Huang); Mogees Jam, a recording studio in your pocket that enables you to build rhythms, melodies and loops using the acoustic properties of any object a Mogees Play is attached to; and Mogees Keys, which is a ‘smart’ keyboard to trigger melodies, arpeggios and chords using the Mogees Play.
“The vision with Mogees Play is to open up music – playing it, making it – to a much wider audience, including gamers and people who are just starting out,” Zamborlin tells me. “You don’t have to have a musical instrument or controller or joystick or whatever, you just need a Mogees Play and a smartphone and you can play wherever you want using whatever you want: like a park bench, or a coffee cup, or even an airplane if you have one nearby”.
(The Mogees team recently attached six Mogees Pros to an airplane to turn it into one giant musical instrument.)
Last week I saw a demo of all three apps in action and, frankly, it was one of those ‘why hasn’t anybody done this before’ moments. Each utilises simple contact microphone technology — or vibration sensor, as it might more accurately be called — but it’s the startup’s machine learning and modelling software that brings it all to life.
The latter enables the software to quickly learn to interpret vibrations picked up from any surface the device is attached to, such as a coffee cup, desk, or any object that creates enough vibration for the Mogees Play to sense. In a musical capacity, this means a Mogees Play can interpret note on and off information, but also things like velocity (how hard you strike a note) to trigger sounds.
“When you’re a 10 year old kid, you probably have more creativity than ever in your life, but very often the process of music making is not seen as a fun, creative process but rather a rigorous discipline that needs to be studied diligently. I wanted to break this paradigm,” explains the Mogees founder.
Zamborlin also tells me that music is a perfect use-case to advance machine learning as it has its own very specific constraints, including being intolerant of latency. “Music technology is an ideal test bed for human-computer interaction, because instruments need to react to our actions in real-time and yet still be extremely expressive,” he says.
In addition, the team have worked hard to reduce the time it takes for the Mogees Play to adapt to a new surface or user’s input. After all, a musical instrument wouldn’t be very playable if it didn’t respond in the way expected each and every time, once you’ve mastered it, of course.
But in effect, regardless of application, the Mogees Play turns any surface into one that can interpret touch (in its own unique way) and as a result the possibilities are endless. A casual rhythm game and two simple music apps are likely just the start. Not least because later this year the startup will open up its API.
“The Mogees technology has now reached a surprising level of accuracy and expressivity, we can capture properties such as velocity, timbre and length of a gesture with incredible accuracy,” says Zamborlin. “Therefore we’re opening up the API because we’ve got a really engaged community and we know that they’ll come up with loads of amazing ideas we’d never thought of for the Mogees technology. We want to allow everyone to turn everyday objects and the world around us into a tactile interface for creativity”.

 

 58 

Finally, a service that tests and ranks the best VPNs for China

Using the internet in China is hard if you want to go beyond the Chinese web. Not only are many popular websites and services blocked in the country, such as Facebook or Twitter, but the general speed of international websites is painfully slow.
Some companies, like Cloudflare, are trying to fix this with architecture, but VPN (virtual private network) software is the nearest thing to a surefire way to use the non-Chinese internet in China. VPNs essentially build a tunnel to let you use the internet as if you are in another country, where, for example, Facebook isn’t blocked.
It’s the “nearest thing” and not a full solution because, in many cases, regular run-of-the-mill VPNs don’t work in China, where a very particular type is required. Even when you find one that does work in China, reliability is a major issue. The Chinese government has cracked down on VPNs and the web more than usual over the past two years or so; that crackdown has included nullifying popular services and, in one case, visiting a developer in person to have software shuttered.
Great Fire , the non-profit entity that we profiled last year which is dedicated to fighting internet restrictions in China, has launched a new service today that — it hopes — will provide a much needed increase in clarity and guidance for finding VPNs that are effective in China.
Circumvention Central tests the speed and reliability of VPNs on actual websites, not just servers, and on an ongoing basis. The result is that, rather than the static list of VPNs you could plump for, as you’ll see on many blogs and websites, Great Fire will provide a living, breathing ranking of those that work best, and how they have performed over time.
The idea, Great Fire’s pseudonymous founders explained in a blog post, is to change the culture of “keeping quiet” about quality VPNs (for fear that publicity would be their downfall) and enable anyone to find a solution that works.
The site already lists more than 10 services, most of which are familiar to Chinese internet users, and Great Fire said it intends to work with more VPN sellers and developers to increase the choice and also help make software perform better in China.
When I put it to them that placing all this valuable data in one place puts a target on VPN services, co-founder Charlie Smith — not a real name — argued that there is nothing new here.
“The authorities already know about all VPNs. It is naive to assume that sharing information about those that work will be helping them in their efforts. The authorities are the only ones who benefit from the secrecy surrounding quick and stable VPNs that work in China,” Smith wrote via email.
The final, important piece of the news today is that the organization is aiming to make money itself. As of now, Great Fire relies on donations from anonymous individuals to keep doing what it does. That can be tricky. It seemed to catch the eye of the Chinese government last year following a media campaign — the upshot of which was an unprecedented high-traffic DDoS attack on its webpages and GitHub account, which took GitHub down for days and generated huge server bills for Great Fire to cover. It managed to emerge from the situation, but things didn’t look good at one point.
“We want to reduce our reliance on these organizations and set GreatFire.org on a path of self-sustainability,” wrote the founders, each of whom has their own job and who don’t know one another outside of the project.
The organization will resell VPN software via affiliate links on the site. So, if you find a service there, clicking the link to purchase it will earn the organization a finder’s fee for sending your business to the VPN maker.
The project and the early ranking can be found at the  Circumvention Central website .

 

 59 

Wi-Fi sharing community Instabridge picks up backing from Draper Associates

Swedish startup Instabridge , a Wi-Fi sharing community and mobile app, has picked up $1 million in new funding. Noteworthy for a European startup is that Silicon Valley investor Tim Draper’s Draper Associates has led the round, with participation from existing backer Balderton Capital.
The Instabridge app lets you share the details of any Wi-Fi hotspot with other Instabridge users, and provides access to Wi-Fi hotspots shared by everyone else in the community. This has enabled it to build a crowdsourced database of Wi-Fi hotspots, in addition to a list of known public venues that have free Wi-Fi, such as McDonald’s or Starbucks.
Meanwhile, I’m told that the app claims two million users, and has recruited 100,000 members to its Wi-Fi sharing community since launch. It’s growing fastest in emerging markets such as Mexico, Brazil, and India. Today’s new funding will be used to invest in growth and speed up the roll out of its app in more markets.
Instabridge co-founder and CEO Niklas Agevik tells me that since we covered the company last September, the team in Stockholm has grown from 5 to 13 people, and the company has built out a 4-person team in Brazil. As part of the latter, the startup has recruited Yelp’s former Nordic community lead to grow the Brazilian Instabridge community.
With that said, Instabridge is still pre-revenue, though Agevik says that the business model will be based on recommending products and services once people come online. “We want to do more than just connect them – actually enable them to benefit from their internet connection,” he tells me.

 

 60 

Four Things Your Business Does That Seem Outdated to Programmers

Good software developers are difficult to find, with so much competition for professionals who have the latest skills. Businesses often pay top dollar to lure in the best talent, only to find they leave soon after. In fact, IT professions in general have some of the highest turnover rates of all trades, often because employees have the luxury of easily moving to a new employer for more pay or better working conditions.
"We're finding that a lot of in-demand tech talent are often choosing to freelance in order to take advantage of a variety of improved quality of life options," says Rishon Blumberg, Founder of 10x Management, which bills itself as the first tech talent agency. "The companies that are having the best luck attracting and retaining tech talent are increasingly offering many of these same quality of life options to their W2 employees. In addition to adapting your company to work with agile talent, the best way to keep and attract talent is to offer the flexibility that the market is demanding. "
Whether your business hires salaried programmers or relies on freelancers, you likely feel challenged to attract and retain skilled programmers. In actuality, it could be that they see some of your processes as outdated. Here are a few things your business may be doing to scare innovative developers away.
Telecommuting is on the rise, with more people working from home each year. However, there are still organizations that prefer to have employees on site, where they can be available for meetings and monitored by supervisors. Software development requires hours of focus, with distractions serving as a productivity drain. Employers who take a strict stance against telecommuting risk losing developers to the many businesses that now allow remote work. From the time you post your job ad, you may find that some of the top programmers skip it once they see that you won't allow telecommuting, giving your competitors the edge in hiring the best developers.
Even if you prefer to have employees on site at least some of the time, consider giving employees who can work from home, such as application developers, the freedom to at least telecommute two to three days per week.
When surveyed, professionals across all industries make no secret of the fact that they hate meetings. With so many collaboration tools now available, that weekly meeting to check in does nothing more than waste everyone's time. Social media-style collaboration tools can make project updates more fun, letting everyone check in with an update on a daily, weekly, or bi-monthly basis. Your employees won't be forced to sit around a table, listening to what Frank from accounting is working on this week, and you'll have a written record of everyone's responses that you can refer to when needed. You'll also avoid excluding freelancers, who often aren't included in regular meetings.
When a professional has an in-demand skillset, even one year without a pay increase can be enough incentive to start a job search. Organizations that have set-in-stone pay standards may scare salaried and contract programmers away, especially if they only offer small increases every year or two. Developers may actually be contacted by recruiters with offers of higher pay. For salaried workers, it's important that supervisors conduct regular evaluations and note any certifications or advanced skills a developer has picked up over the course of the year. If your pay is lower than market average and you can't afford higher salaries, consider bringing in freelancers whom you can work with on designated projects.
Developers design applications that automate processes. When your organization relies on outdated processes like paper timesheets or faxed documents, developers may question your business's technological integrity. Invest in processes that eliminate paper and improve productivity, such as automated HR tools and document-signing software.
Not only will these solutions improve productivity, they'll demonstrate to your employees, contractors, and clients that your business is forward thinking enough to embrace the latest technology in everything you do. If one of your developers mentions an easier way to accomplish something, listen to the suggestion and consider putting it to use. Often, your IT team members will be able to save your business time and money by recommending ways you can automate.
A business's development team is one of its most valuable assets, helping create Web sites and applications that connect with customers and make employees' lives easier. It's important to invest in up-to-date processes to attract innovative employees and keep them, showing them an innovative culture that will help them grow and thrive.

 

 61 

A Deeper Look: Java Thread Example

The concept of a thread becomes more intriguing as we dive deeper into its construction from different perspectives, beyond the gross idea of multitasking. The Java API is rich and provides many features to deal with multitasking with threads. It is a vast and complex topic. This article is an attempt to engross the reader in some concepts that will aid in better understanding Java threads, eventually leading to better programming.
A program in execution is called a process. It is an activity that contains a unique identifier called the Process ID, a set of instructions, a program counter—also called the instruction pointer—handles to resources, an address space, and many other things. The program counter keeps track of the current instruction in execution and automatically advances to the next instruction once the current one completes.
Multitasking is the ability to execute more than one task/process at a single instance of time. It definitely helps to have multiple CPUs to execute multiple tasks all at once but, in a single-CPU environment, multitasking is achieved with the help of context switching. Context switching is the technique where CPU time is shared across all running processes and processor allocation is switched in a time-bound fashion. To schedule a process onto the CPU, a running process is interrupted and its state is saved; a process that has been waiting for its turn is then restored to gain its processing time. This gives the illusion that the CPU is executing multiple tasks, while in fact parts of instructions from multiple processes are executed in a round-robin fashion. Note that even a multi-CPU machine executes only a fixed number of instructions truly in parallel at any instant; it is the scheduler's interleaving that lets a machine appear to run arbitrarily many tasks at once.
There is a problem with the independent execution of multiple processes: each of them carries the load of a non-sharable copy of its resources. Much of this could, in principle, be shared across multiple running processes, yet processes usually do not share address spaces with one another. If they must communicate, they can do so only via inter-process communication facilities such as sockets or pipes. This poses several problems for process communication and resource sharing, apart from making the process what is commonly called heavy-weight.
Modern operating systems solved this problem by creating multiple units of execution within a process that can share resources and communicate across execution units. Each of these single units of execution is called a thread. Every process has at least one thread and can create multiple threads, bounded only by the operating system's limit of allowed shared resources, which usually is quite large. Unlike a process, a thread has only a couple of concerns of its own: a program counter and a stack.
A thread within a process shares all its resources, including the address space. A thread, however, can maintain a private memory area called Thread Local Storage, which is not shared even with threads originating from the same process. The illusion of multi-threading is established with the help of context switching. Unlike context switching between processes, a context switch between threads is less expensive because thread communication and resource sharing are easier. Programs can be split into multiple threads and executed concurrently. A modern machine with a multi-core CPU can further leverage this: threads may be scheduled on different cores to improve the overall performance of program execution.
A thread is associated with two types of memory: main memory and working memory. Working memory is very personal to a thread and is non-sharable; main memory, on the other hand, is shared with other threads. It is through this main memory that the threads actually communicate. However, every thread also has its own stack to store local variables, like the pocket where you keep quick money to meet your immediate expenses.
Because each thread has its own working memory, which includes processor cache and register values, it is up to the Java Memory Model (JMM) to maintain the accuracy of shared values that may be accessed by two or more competing threads. In multi-threading, one update operation to a shared variable in memory can leave it in an inconsistent state unless coordinated in such a way that any other thread is guaranteed an accurate value, even amid random read/write operations on the shared variable. The JMM ensures reliability with various housekeeping tasks, some of which are as follows:
Atomicity guarantees that a read or write operation on a field is executed indivisibly. Now, what does that mean? According to the Java Language Specification (JLS), operations on int, char, byte, float, short, and boolean values are atomic, but operations on double and long values are not. Here's an example:
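(The original listing isn't preserved in this copy; a minimal sketch of the kind of field in question, with placeholder names:)

    public class Counter {
        // A 64-bit field: the JLS permits a write to this field to be
        // split into two separate 32-bit writes.
        private long value;

        public void setValue(long v) { value = v; }
        public long getValue() { return value; }
    }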
Because a long is 64 bits wide, assigning it may internally involve two separate operations: one that writes the first 32 bits and a second that writes the last 32 bits. Now, what if we are running a 64-bit JVM? The Java Language Specification (JLS) provides the following explanation:
"Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency's sake, this behaviour is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts. Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications. "
This specifically is a problem when multiple threads read or update a shared variable. One thread may update the first 32-bit half and, before it updates the last 32 bits, another thread may pick up the intermediate value, resulting in an unreliable and inconsistent read operation. This is the problem of dealing with instructions that are not atomic. However, there are ways out for long and double variables.
Declare the variable as volatile. Volatile variables are always written to and read from main memory; they are never cached. The declaration is as follows:
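(A minimal sketch; the field name is a placeholder:)

    public class SafeHolder {
        // volatile makes reads and writes of this 64-bit field atomic
        // and visible across threads
        private volatile long value;
    }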
Or, synchronize getter/setter:
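(Again a sketch with hypothetical names:)

    public class SafeHolder {
        private long value;

        // Both accessors lock the same monitor, so reads and writes
        // of the 64-bit field cannot interleave.
        public synchronized void setValue(long v) { value = v; }
        public synchronized long getValue() { return value; }
    }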
Or, use AtomicLong from the java.util.concurrent.atomic package, as shown here:
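(A sketch using the standard java.util.concurrent.atomic API:)

    import java.util.concurrent.atomic.AtomicLong;

    public class SafeHolder {
        private final AtomicLong value = new AtomicLong();

        public void setValue(long v) { value.set(v); }  // atomic write
        public long getValue() { return value.get(); }  // atomic read
    }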
Synchronization of inter-thread communication is another issue that can get quite messy unless handled carefully. Java, however, provides multiple ways to establish communication between threads, and synchronization is one of the most basic mechanisms among them. It uses monitors to ensure that shared variable access is mutually exclusive: any competing thread must go through lock/unlock procedures to get access. On entering a synchronized block, the values of all variables in the working memory are reloaded from main memory, and they are written back to main memory as soon as the thread leaves the block. This ensures that, once a thread is done with a variable, its value is left in main memory so that some other thread can access it soon after.
There are two types of thread synchronization built into Java:
A critical section of code is designated with reference to an object's monitor, and a thread must acquire that monitor before executing the critical section. To achieve this, the synchronized keyword can be used in two ways:
Either declare a method as a critical section. For example,
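(A minimal sketch, where count is a hypothetical shared field:)

    public class Counter {
        private int count;

        public synchronized void increment() {
            count++;  // the entire method body is the critical section
        }
    }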
Or, create a critical section block. For example,
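(The same sketch, with only the block designated as critical:)

    public class Counter {
        private int count;

        public void increment() {
            synchronized (this) {  // acquire this object's monitor first
                count++;           // only this block is the critical section
            }
        }
    }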
The JVM handles the responsibility of acquiring and releasing an object monitor's lock; the synchronized keyword simply designates a block or method as critical. Before entering the designated block, a thread first acquires the monitor lock of the object and releases it as soon as its job is done. There is no limit on how many times a thread can acquire an object monitor's lock, but it must release the lock before another thread can acquire the same object's monitor.
This article tried to give a perspective on what a Java thread means in one of its many aspects, with a very rudimentary explanation that omits many details. The thread construct in Java is deeply associated with the Java Memory Model, especially in how its implementation is handled by the JVM behind the scenes. Perhaps the most valuable literature for understanding the idea is the Java Language Specification and the Java Virtual Machine Specification, available in both HTML and PDF formats. Interested readers may go through them to get a more elaborate idea.

 

 62 

Top 10 Reasons to Get Started with React.JS

By Andrew Allbright
React is a popular framework used by large enterprise ventures and small lone developers alike to create views with complicated relationships in a modular fashion. It provides just enough structure to allow for flexibility, yet enough railing to avoid common pitfalls when creating applications for the Web. In the style of a top 10 list, I will describe reasons why you should choose this framework for your next project.
One of the reasons React became so popular is its video game-inspired rendering system. The basics of that system revolve around minimizing DOM interactions by batching updates, using an in-memory virtual DOM to calculate differences, and keeping state immutable.
One thing to note is that this approach ran counter to the trends of other JavaScript frameworks at the time. Angular 1, Ember, Knockout, and even jQuery were concerned with data binding to elements on the page. However, it turns out that dirty checking two-way data bindings produces exponentially more calculations than one-way binding as you add more elements into the mix. Angular 2 has since abandoned dirty checking and two-way bindings for a more React-like approach.
The short list of lifecycle methods makes this framework one of the easiest to understand. In fact, it wouldn't be unheard of to become proficient with the entire library in under a day. This can be attributed to the "always rerender" nature of each view and how it accommodates state or property changes to its view.
To emphasize this point, look at what all you need to define a simple React component...
Your render function lends itself to terser, more immutable, functional programming that has become trendy in the JavaScript community with ES2015, ES2016.
It may seem obvious today but, when React.JS was initially introduced to the JavaScript world, the idea of tightly coupling your view definition with the logic that controls it was controversial. React was released into a paradigm where client-side copies of traditional MVC frameworks, like those found on the server side, were very popular. Traditional MVC separates the HTML from controllers, whose responsibility is to combine multiple views and marshal data into them. That literally means these "concerns" were separated into their own files.
The architects of React took another approach: they argued that the separation of HTML from JavaScript is superficial. Indeed, your HTML and your JS application code are very tightly coupled, and keeping them in separate files was more a separation of technologies than a separation of concerns. Imagine trying to change class names or id tags of HTML elements in a large jQuery application. You would have to verify that none of your DOM bindings were destroyed, suggesting a close relationship between the two.
That's where JSX comes into the mix. By putting your component logic within the same file as the view it is operating on, it makes the module easier to reason about and the best part is you can leverage vanilla JavaScript to express your view.
React is a library that defines your view but gives you lifecycle "hooks" to make server-side requests. This is an advantage because it means that, once you understand how XHR requests are made, you can more easily swap the library you use to make them than in, say, BackboneJS. These hooks are state, props, componentWillMount, and componentDidMount (if you want to wait until late in the game).
How you organize multiple different XHR interactions is largely up to you. Common patterns include the one I've just described, Flux or Redux.
Although React is curated by the developers at Facebook, it is very much a community-driven library. Viewing the GitHub issue tracker and PRs, you get a sense that the developers who deputize themselves to maintain this framework find joy in sharing code and getting into sometimes heated debate. This is an advantage for your project because you can be sure you will get code that has been vetted by passionate developers.
In fact, community trends inspire the architects as much as Facebook inspires the community. Redux has all but taken over Flux as the collection of libraries for creating larger-scale applications, and it was created by someone for a conference demo. Facebook has since embraced it as one of the best options for developers to get started with.
This is not a unique attribute for most JavaScript frameworks, but React is one of the more popular libraries that is written in pure JavaScript. Plus, it's always fun to see who has been recognized when Facebook puts up its release notes.
Large companies like Facebook, Netflix, and Walmart have embraced React as their library of choice for handling view related tasks. This vote of confidence is no accident.
React has a neat feature where it can detect whether or not it needs to initially render the DOM onto the page. That means if you precompiled the view in your server-side code before delivering to the client's browser, React would be able to simply bootstrap its listeners and go from there.
React provides the means to generate HTML from its syntax easily. This was intentional to gain favor with SEO bots, which traditionally don't run JavaScript in their crawlers (or at least mark those sites worse than pregenerated ones).
Compared to other frameworks, React's 43.2 KB is a good size for what you get. For comparison: Angular 2's minified size is 125 KB and Ember is 113 KB, although Knockout 3.4.0 is 21.9 KB and jQuery 3.0 is 29.8 KB.
React's ecosystem is vast indeed. The way the framework has been moving is towards separating view logic from "purer" business rules. By default, you adopt this strategy. This allows you to target other platforms, such as mobile, Virtual Reality devices, TV experiences, or even to generate email.
The reason you should choose React for your next project is that its lifecycle methods, state, and props provide just enough railing to create scalable applications, but not enough to stifle liberal use of different libraries. Need XHR data? Use componentWillMount. Need to make a particular component look pretty using a well-known jQuery library? Use componentDidMount with shouldComponentUpdate or componentDidUpdate to stop DOM manipulations or restyle the element after changes easily.
The point is there is just enough railing, corresponding to the natural lifecycle of components within the page, to make a great deal of sense to developers of any experience level, but not enough for there to be a single "React" way of doing things. It is very versatile in that way.
Now that you've read this list, I hope I've inspired you to find a React boilerplate repo and get started on a new project. React is fun to work with and, as I've laid out, there are so many reasons why you should choose this framework over others.

 

 63 

Stream Operations Supported by the Java Streams API

The Stream API is one of the most sophisticated implementations in Java. Streams are mainly used in association with the collection framework and, because of that association, confusion sometimes arises in their usage. This is primarily because a stream inherently resembles the collection data structure and can be better understood when compared with it. If a stream is a sequence of data elements, a generic collection is a data structure that acts as a container for those elements. They are complementary and often used together in Java code and, for the sake of understanding, we'll compare and contrast them head on. This article takes on the concept of streams from a comparative perspective to illustrate some of their usage in regular Java programming.
A stream basically is a sequence of data elements that supports sequential and parallel aggregate operations. Aggregate operations are things like computing the sum of the integer elements in a stream or mapping a stream according to string length, and so forth. A stream supports two types of operations with reference to the way data elements are pulled from the data source: intermediate operations, which are lazy and produce another stream, and terminal operations, which are eager and actually trigger the traversal of the pipeline. Lazy pulling means data elements are not fetched until required, and the strategy of a stream pipeline is, by default, lazy. This laziness makes streams particularly suitable for working with huge sequences of data because they are minimal on memory usage: elements are processed on demand rather than all materialized at once. Eagerly materializing an entire sequence in memory can be faster for small data sets, but it is unsuitable for operating on huge sequences of data elements.
Although streams may seem somewhat similar to collections, there are many significant differences.
Unlike collections, streams are not built to store data. They are used on demand to pull data elements from the data source and pass them along the pipeline for further processing. A collection, on the other hand, is an in-memory data structure: data elements must exist in memory to execute an add/fetch operation on them. In a way, a stream is more concerned with the flow of data, rather than the storage of data, which is the main idea of collections.
Because streams pull data on demand, it is possible to represent a sequence of infinite data elements. For example, stream operations can be plugged into a data source that generates infinite data, such as a stream of data arriving over an I/O channel. A collection, on the other hand, is limited due to its use of an in-memory store.
Both streams and collections operate on a number of elements, so the requirement of iteration is obvious. A stream is based on internal iteration; a collection is based on external iteration. Let's illustrate this with the help of an example. A simple aggregate operation on a collection can be as follows:
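(The original listing isn't preserved in this copy; a minimal sketch of an external-iteration aggregate that filters even numbers, doubles them, and sums them, using placeholder data:)

    import java.util.Arrays;
    import java.util.List;

    List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);
    int sum = 0;
    for (Integer n : numbers) {
        if (n % 2 == 0) {    // keep only the even numbers
            sum += n * 2;    // double each one and accumulate
        }
    }
    System.out.println(sum); // prints 24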
The code uses a for-loop to iterate over the list of elements.
Observe the following code. There is no external iteration; iteration is applied internally and, surprisingly, the stream operation can be written with marked brevity by using a lambda.
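(A sketch of the same aggregate over the list above, rewritten as a stream:)

    int sum = numbers.stream()
                     .filter(n -> n % 2 == 0)  // iteration happens inside the stream
                     .mapToInt(n -> n * 2)
                     .sum();                   // same result: 24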
The sequential structure of external iteration is not suitable for parallel execution through multiple threads. Java provides an excellent Fork/Join framework library to leverage modern multi-core CPUs, but using this framework is not that simple, especially for beginners. Streams, however, simplify some parallel execution functionality; the preceding code can be written as follows for parallel execution.
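(The same pipeline, switched to a parallel stream:)

    int sum = numbers.parallelStream()         // may split work across threads
                     .filter(n -> n % 2 == 0)
                     .mapToInt(n -> n * 2)
                     .sum();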
In this scenario, the stream is not only using internal iteration but also using multiple threads to do the filtering, multiplication, and summing operations in parallel. Using internal iteration, however, does not mean they cannot be iterated externally. The following external iteration is equally possible.
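(A sketch of external iteration over a stream, via its iterator:)

    import java.util.Iterator;

    Iterator<Integer> it = numbers.stream()
                                  .filter(n -> n % 2 == 0)
                                  .iterator();
    while (it.hasNext()) {
        System.out.println(it.next());
    }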
Stream-related classes and interfaces can be found in the java.util.stream package. They are hierarchically arranged in the following manner.
Figure 1: Arrangement of the java.util.stream package
All stream sub-interfaces have BaseStream as their base interface, which in turn extends AutoCloseable. Also, the package contains two classes—Collectors and StreamSupport—along with a few builder interfaces such as DoubleStream.Builder, IntStream.Builder, LongStream.Builder, and Stream.Builder, and a Collector interface. In practice, streams are rarely used without a collection as their data source; they mostly go hand-in-hand in Java programming. Refer to the Javadoc for an elaborate description of each of the interfaces and classes in the hierarchy.
The iterate() method of a stream is quite versatile and can be used in many ways. The method takes two arguments: a seed and a function. The seed is the first element of the stream; the second element is obtained by applying the function to the first element, and so on. Suppose we want to print the first five odd natural numbers; we may write the following:
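(A sketch consistent with the output described next:)

    import java.util.stream.Stream;

    Stream.iterate(1, n -> n + 2)  // seed 1; each element is the previous plus 2
          .limit(5)
          .forEach(n -> System.out.print(n + " "));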
This will print 1 3 5 7 9.
If we want to skip the first five and then print next five odd natural numbers, we may write the code as:
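(As a sketch:)

    Stream.iterate(1, n -> n + 2)
          .skip(5)   // drop 1 3 5 7 9
          .limit(5)
          .forEach(n -> System.out.print(n + " "));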
Now, it will print 11 13 15 17 19.
We can generate some random numbers with the generate() method as follows:
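(One possibility:)

    Stream.generate(Math::random)  // each element is a fresh pseudo-random double
          .limit(3)
          .forEach(System.out::println);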
If we want a random integer, we may write:
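(A sketch using IntStream.generate with a java.util.Random; the bound of 100 is arbitrary:)

    import java.util.Random;
    import java.util.stream.IntStream;

    Random rnd = new Random();
    IntStream.generate(() -> rnd.nextInt(100))  // pseudo-random ints in [0, 100)
             .limit(3)
             .forEach(System.out::println);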
Java 8 introduced many classes that can return their content as a stream. The chars() method in CharSequence is an example.
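(The original listing isn't preserved; a sketch that produces the PEMDAS output described below, assuming the sample sentence "Please Excuse My Dear Aunt Sally":)

    "Please Excuse My Dear Aunt Sally".chars()      // an IntStream of char values
        .filter(Character::isUpperCase)             // keep the initial capitals
        .forEach(c -> System.out.print((char) c));  // prints PEMDAS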
This prints the first character of each word in the sentence: PEMDAS.
Streams can be directly obtained from the Arrays class as follows:
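(For example, with placeholder data:)

    import java.util.Arrays;

    int total = Arrays.stream(new int[] {1, 2, 3, 4, 5}).sum();  // 15
    Arrays.stream(new String[] {"alpha", "beta", "gamma"})
          .forEach(System.out::println);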
This is just the beginning; there can be numerous such examples. The Stream API is excellent at minimizing the toil of programming, which sometimes means re-inventing the same wheel. When the API is combined with lambdas, the result is strikingly terse; many lines of code can be compressed into a single line. This article is an attempt to give a glimpse of what the Stream API is about and was never intended to be comprehensive guidance. Future articles will explore many of its practical usages in more detail.

 

 64 

Exploring the Java String Tokenizer

String tokenization is a process where a string is broken into several parts; each part is called a token. For example, if "I am going" is a string, the discrete parts—such as "I", "am", and "going"—are the tokens. Java provides ready-made classes and methods to implement the tokenization process. They are quite handy for conveying specific semantics or contextual meaning to individual parts of a string. This is particularly useful for text processing where you need to break a string into several parts and use each part as an element for individual processing. In a nutshell, tokenization is useful in any situation where you need to decompose a string into its individual parts. This article provides the background concepts and their implementation in Java.
A token, or an individual element of a string, can be filtered during extraction, meaning we can define the semantics of a token when extracting discrete elements from a string. For example, in a string such as "Hi! I am good. How about you?", sometimes we may need to treat each word as a token and, at other times, a set of words collectively as a token. So, a token basically is a flexible term and is not necessarily an atomic part, although it may be atomic according to the context. For example, the keywords of a language are atomic according to the lexical analysis of the language, but they may typically be non-atomic and convey a different meaning under a different context.
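(The original listing isn't preserved in this copy; a minimal sketch using the sample sentence above and the default whitespace delimiter:)

    import java.util.StringTokenizer;

    public class TokenDemo {
        public static void main(String[] args) {
            String msg = "Hi! I am good. How about you?";
            StringTokenizer st = new StringTokenizer(msg);  // default delimiter: whitespace
            while (st.hasMoreTokens()) {
                System.out.println(st.nextToken());
            }
        }
    }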
The tokens are:
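(Assuming the sketch above:)

    Hi!
    I
    am
    good.
    How
    about
    you?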
Now, if we change the code to the following:
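(Changing only the constructor call, to use "." as the delimiter:)

    StringTokenizer st = new StringTokenizer(msg, ".");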
The tokens are:
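(Assuming the change above; note that "." itself is not returned:)

    Hi! I am good
     How about you?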
Observe that the StringTokenizer class contains three constructors, as follows: (refer to the Java API Documentation)
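    StringTokenizer(String str)
    StringTokenizer(String str, String delim)
    StringTokenizer(String str, String delim, boolean returnDelims)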
When we create a StringTokenizer object with the second constructor, we can define a delimiter to split the tokens as per our need. If we do not provide any, space is taken as the default delimiter. In the preceding example, we have used "." (the dot/stop character) as a delimiter. Note that the delimiting character itself is not counted as a token; it is simply used as a token separator without itself being a part of the token. This can be seen when the tokens are printed in the example code above; observe that "." is not printed.
So, in a situation where we want to control whether the delimiter character is also counted as a token, we may use the third constructor. This constructor takes a boolean argument to enable/disable returning the delimiter characters as tokens. We also can provide a delimiting character later, while extracting tokens with the nextToken(String delim) method.
We may also use delimiter characters such as " ", "\t", "\n", "\r", and "\f" to mean the space, tab, newline, carriage-return, and form-feed characters, respectively.
Accessing individual tokens is no big deal. StringTokenizer contains six methods for working with the tokens:
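    countTokens()
    hasMoreElements()
    hasMoreTokens()
    nextElement()
    nextToken()
    nextToken(String delim)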
They are quite simple. Refer to the Java API Documentation for details about each of them.
The split method defined in the String class is more versatile in the tokenization process. Here, we can use Regular Expression to break up strings into basic tokens.
According to the Java API Documentation:
" StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead. "
The preceding example with StringTokenizer can be rewritten with the string split method as follows:
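(A sketch of the same example rewritten with split:)

    String msg = "Hi! I am good. How about you?";
    String[] tokens = msg.split("\\.");  // "." must be escaped in a regular expression
    for (String token : tokens) {
        System.out.println(token);
    }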
Output:
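(With the sketch above:)

    Hi! I am good
     How about you?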
To extract the numeric value from the string below, we may change the code as follows, using a regular expression.
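(The original string isn't preserved; as a hypothetical example:)

    String msg = "There are 365 days in a year";  // placeholder input
    for (String token : msg.split("\\D+")) {      // split on runs of non-digits
        if (!token.isEmpty()) {
            System.out.println(token);            // prints 365
        }
    }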
As we can see, the strength of the split method of the String class is in its ability to use Regular Expression. We can use wild cards and quantifiers to match a particular pattern in a Regular Expression. This pattern then can be used as the delimitation basis of token extraction.
Java has a dedicated package, called java.util.regex, to deal with Regular Expressions. This package consists of two classes, Matcher and Pattern, an interface, MatchResult, and an exception called PatternSyntaxException.
Regular Expressions are quite an extensive topic in themselves. Let's not deal with all of that here; instead, let's focus only on tokenization through the Matcher and Pattern classes, which provide supreme flexibility in the tokenization process. A Pattern object represents a compiled regular expression that is used by the Matcher object to perform three kinds of match operations:
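(The original list isn't preserved here; per the Matcher Javadoc, these are:)

    matches(): attempts to match the entire input sequence against the pattern
    lookingAt(): attempts to match the input sequence against the pattern, starting at the beginning
    find(): scans the input sequence for the next subsequence that matches the pattern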
For tokenization, the Matcher and Pattern classes may be used as follows:
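(A minimal sketch that treats each run of word characters as a token:)

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexTokenDemo {
        public static void main(String[] args) {
            String msg = "Hi! I am good. How about you?";
            Pattern pattern = Pattern.compile("\\w+");  // one or more word characters
            Matcher matcher = pattern.matcher(msg);
            while (matcher.find()) {                    // each match is a token
                System.out.println(matcher.group());
            }
        }
    }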
Output:
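(With the sketch above:)

    Hi
    I
    am
    good
    How
    about
    you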
String tokenization is a way to break a string into several parts. StringTokenizer is a utility class for extracting tokens from a string. However, the Java API documentation discourages its use and instead recommends the split method of the String class to serve similar needs. The split method uses Regular Expressions. There are classes in the java.util.regex package specifically dedicated to Regular Expressions, called Pattern and Matcher. Although the split method also takes a Regular Expression, the Pattern and Matcher classes are more convenient when dealing with complex expressions; in very simple circumstances, the split method is quite convenient.

 

 65 

Streamline Your Understanding of the Java I/O Stream

The Java I/O stream library is an important part of everyday programming. The stream API is overwhelmingly rich, replete with interfaces, objects, and methods to support almost every programmer's need. In catering to every need, the stream library has become a large collection of methods, interfaces, and classes, with a recent extension into a new package called NIO.2 (New I/O version 2). It is easy to get lost among the stream implementations, especially for a beginner. This article tries to provide some clues to streamline your understanding of the I/O stream APIs in Java.
Stream literally means continuous flow, and I/O stream in Java refers to the flow of bytes between an input source and an output destination. The source or destination can be anything that contains, generates, or consumes data: for example, a peripheral device, a network socket, a memory structure like an array, a disk file, or another program. After all, bytes are bytes; reading data sent from a server network stream is no different than reading a local file, and similarly for writing data. The intriguing part of Java I/O is its unique approach, very different from how I/O is handled in C or C++. Although the data type may vary along with the I/O endpoints, the fundamental approach of the methods in the output and input streams is the same throughout the Java APIs. There will always be a read method for an input stream and a write method for an output stream.
After the stream object is created, we almost can ignore the intricacies involved in realizing the exact details of I/O processing. For example, we can chain filter streams to either an output stream or an input stream and modify the data in the process of a read or write operation. The modification can be applying encryption or compression, or simply providing methods to convert data into other formats.
Readers and writers, for example, can be chained to input and output streams to realize character streams rather than byte streams. Readers and writers can handle a variety of character encodings, such as multi-byte Unicode characters (UTF-8).
Thus, a lot goes on behind the scenes, even in a seemingly simple I/O flow from one end to another. Implementing this from scratch is by no means simple and requires extensive coding. The Java stream APIs handle these complexities, giving developers space to concentrate on their productive ends rather than brainstorming the intricacies of I/O processing. One just needs to understand the right use of the API interfaces, objects, and methods, and let them handle the intricacies on one's behalf.
The classes defined in the java.io package implement Input/Output Streams, File handling, and Serialization. A File is not exactly a stream, but stream operations are the means to achieve file handling. File actually deals with file system manipulation, such as read/write operations, manipulating file properties, disk access, permissions, subdirectory navigation, and so forth. Serialization, on the other hand, is the process of persisting Java objects onto a local or remote machine. A complete delineation is out of the scope of this article; instead, we focus here only on the I/O streaming part. The base classes for I/O streaming are the abstract classes InputStream and OutputStream, and these classes are extended to add functionality. They can be categorized intuitively as follows.
Figure 1: The Java IO Stream API Library
Byte Stream classes are mainly used to handle byte-oriented I/O. They are not restricted to any particular data type and can be used with any data, including binary data. The data is translated into 8-bit bytes for I/O operations. This makes byte stream classes suitable for I/O operations where a specific data type does not matter and the content can be dealt with in binary form. Byte Stream classes are mainly used in network I/O (such as sockets), binary file operations, and so on. There are many Byte Stream classes in the library; all are extensions of an abstract class called InputStream for input streaming and OutputStream for output streaming. A concrete use of the byte stream classes is sketched below.
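A minimal sketch using FileInputStream and FileOutputStream (the file names are illustrative):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class ByteCopy {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream("in.dat");
             FileOutputStream out = new FileOutputStream("out.dat")) {
            int b;
            while ((b = in.read()) != -1) {   // read() returns -1 at end of stream
                out.write(b);
            }
        }
    }
}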
Character Streams deal with Unicode characters rather than bytes. Sometimes, the character sets used locally are different, non-Unicode sets. Character I/O automatically translates a local character set to and from Unicode upon an I/O operation, without extensive intervention by the programmer. Using Character Streams is safe for future upgrades to support internationalization, even though the application may use a local character set such as ASCII. The character stream classes make the transformation possible with very little recoding. Character stream classes are derived from the abstract classes Reader and Writer. For example, the character stream classes that handle the translation between characters and bytes are InputStreamReader and OutputStreamWriter, used as sketched below.
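A minimal sketch of the byte-to-character bridge (file names and charset are illustrative):

import java.io.*;
import java.nio.charset.StandardCharsets;

public class CharBridge {
    public static void main(String[] args) throws IOException {
        // InputStreamReader/OutputStreamWriter bridge byte streams to character streams
        try (Reader reader = new InputStreamReader(
                 new FileInputStream("in.txt"), StandardCharsets.UTF_8);
             Writer writer = new OutputStreamWriter(
                 new FileOutputStream("out.txt"), StandardCharsets.UTF_8)) {
            int ch;
            while ((ch = reader.read()) != -1) {
                writer.write(ch);
            }
        }
    }
}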
Sometimes, the data needs to be buffered in between I/O operations. For example, an I/O operation may trigger a slow operation like a disk access or some network activity. These expensive operations can bring down the overall performance of the application. As a result, to reduce this overhead, the Java platform implements buffered (buffer = memory area) I/O streams. On invocation of an input operation, the data is first read from the buffer. If no data is found, a native API is called to fetch the content from an I/O device. Calling a native API is expensive, but if the data is found in the buffer, access is quick and efficient. Buffered streams are particularly suitable for I/O access dealing with huge chunks of data.
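A minimal sketch (the file name is illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BufferedDemo {
    public static void main(String[] args) throws IOException {
        // BufferedReader fills its buffer in large chunks; readLine() is served from memory
        try (BufferedReader br = new BufferedReader(new FileReader("big.log"))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}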
Data streams are particularly suitable for reading and writing primitive data to and from streams. The primitive values can be of type String or int, long, float, double, byte, short, boolean, and char. The direct implementation classes for data I/O streams are DataInputStream and DataOutputStream, which implement the DataInput and DataOutput interfaces apart from extending FilterInputStream and FilterOutputStream, respectively.
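A minimal sketch (the file name and values are illustrative):

import java.io.*;

public class DataDemo {
    public static void main(String[] args) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream("data.bin"))) {
            out.writeInt(42);            // primitives are written in a portable binary format
            out.writeDouble(3.14);
            out.writeUTF("hello");
        }
        try (DataInputStream in = new DataInputStream(new FileInputStream("data.bin"))) {
            System.out.println(in.readInt());     // values must be read back in the same order
            System.out.println(in.readDouble());
            System.out.println(in.readUTF());
        }
    }
}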
As the name suggests, Object Streams deal with Java objects. That means, instead of dealing with primitive values like the Data Stream classes, Object Streams perform I/O operations on objects. Primitive values are atomic, whereas Java objects are composite by nature. The primary interfaces for Object Streams are ObjectInput and ObjectOutput, which are basically extensions of the DataInput and DataOutput interfaces, respectively. The implementation classes for Object Streams are ObjectInputStream and ObjectOutputStream.
Object Streams are closely associated with Serialization; the ObjectStreamConstants interface provides several static constants as stream modifiers for the purpose.
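A minimal sketch (the file name is illustrative; java.util.Date is used because it is Serializable):

import java.io.*;

public class ObjectDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("obj.ser"))) {
            out.writeObject(new java.util.Date());   // the object graph must be Serializable
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("obj.ser"))) {
            System.out.println(in.readObject());
        }
    }
}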
Refer to the Java API Documentation for specific examples of each stream type.
Following is a rudimentary hierarchy of Java IO classes.
Figure 2: A rudimentary hierarchy of Java IO classes
Input stream classes are derived from the abstract class java.io.InputStream. The basic operations of this class are as follows:
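For reference, the core methods of java.io.InputStream are:

int read()                              // read one byte; returns -1 at end of stream
int read(byte[] b)                      // read into a buffer; returns the number of bytes read
int read(byte[] b, int off, int len)    // read into a slice of a buffer
long skip(long n)                       // skip over and discard n bytes
int available()                         // an estimate of bytes readable without blocking
void mark(int readlimit)                // remember the current position...
void reset()                            // ...and return to it later
boolean markSupported()                 // whether mark/reset are supported
void close()                            // release the underlying resource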
All output stream classes are extensions of the abstract class java.io.OutputStream. It contains the following variety of operations:
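For reference, the core methods of java.io.OutputStream are:

void write(int b)                       // write a single byte
void write(byte[] b)                    // write a whole buffer
void write(byte[] b, int off, int len)  // write a slice of a buffer
void flush()                            // force any buffered bytes out
void close()                            // flush, then release the resource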
It may seem overwhelming at the beginning, but observe that no matter which extension classes you use, you'll end up using these methods for I/O streaming. For example, ByteArrayOutputStream is a direct extension of the OutputStream class; you use these same methods to write into an extensible array. Similarly, FileOutputStream writes onto a file, but internally it uses native code, because a "File" is a product of the file system, and how it is actually maintained depends completely upon the underlying platform. For example, Windows has a different file system than Linux.
Observe that both OutputStream and InputStream provide a raw implementation of these methods. They do not bother about the data formats we want to use; the extension classes are more specific in this matter. It may happen that the supplied extension classes are insufficient for our needs. In such a situation, we can create our own stream classes. Remember, the InputStream and OutputStream classes are abstract, so they can be extended to create a customized class and give a new meaning to the read and write operations. This is the power of polymorphism.
Filter streams, such as PushbackInputStream and the other extensions of FilterInputStream and FilterOutputStream, provide a sense of customized implementation of the stream lineage. They can be chained so that data passes from one filter to the next along the chain. For example, an encrypted, compressed network stream can be chained through a BufferedInputStream, then a CipherInputStream, then a GZIPInputStream, and finally an InputStreamReader to ultimately recover the actual data, as sketched below.
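A minimal sketch of such a chain (the file name is illustrative; the CipherInputStream layer, which would need a configured Cipher, is omitted for brevity):

import java.io.*;
import java.util.zip.GZIPInputStream;

public class ChainDemo {
    public static void main(String[] args) throws IOException {
        // Each wrapper transforms the data as it flows up the chain
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(
                    new GZIPInputStream(
                        new BufferedInputStream(
                            new FileInputStream("payload.gz")))))) {
            System.out.println(reader.readLine());
        }
    }
}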
Refer to the Java API documentation for specific details on the classes and methods discussed above.
The underlying principles of the stream classes are undoubtedly complex, but the interface surfaced through the Java API is simple enough that we can ignore the underlying details. Focus on these four classes: InputStream, OutputStream, Reader, and Writer. This will help you get a grip on the APIs initially; then use a top-down approach to learn their extensions. I suppose this is the key to streamlining your understanding of the Java I/O stream. Happy learning!

 

 66 

Testing Controllers in Laravel with the Service Container

By Terry Rowland , Senior Backend Web Developer at Enola Labs.
Automated testing is critical to my method of development. When I first started using Laravel, the way it did dependency injection via the constructor and via controller methods seemed like magic to me and caused quite a bit of confusion in testing. So, with this post, I hope to clear up some of that confusion by explaining how the controller and the Service Container work together, and how to leverage the container for testing purposes.
I'm going to be showing all my examples in Laravel 5.2.31 and I'll be using Laravel Homestead (Vagrant) to build everything. Here is a link to the repository where you can grab the code. As a side note, if you are familiar with Laravel, don't run the migrations. In this example, I purposely want errors to occur to show we shouldn't be hitting the DB.
First, I will create the route I want to use, in the app/Http/routes.php file.
Now, I'm going to use a repository, below, specifically for example purposes, but I wouldn't suggest you use one in this case. Typically, repositories are reserved for more complex queries and logic, so using one here would more than likely be overkill. So, let's create a folder under app, called "Repositories", and place the following code in the file named "UserRepository.php".
This code will work, but I don't plan on using it. I'll also say that this code would be tested if it were going into one of my applications, but for the purpose of this, I want to make sure the "interface" and intention of the class is fleshed out first.
Next, let's build the controller we will be using for the route we created. In the app/Http directory, create a file named UsersController.php and place the following code in it:
This is simple code here: just an index function that leverages the Laravel Service Container to get a built-out UserRepository so we can return its results.
Next, I will jump over to my test and start the process of building this out. My focus here is on creating the basic shell of what I know is minimally needed to make everything work.
In the Vagrant VM, in the application directory, if I were to run the command "phpunit", I would actually (attempt to) hit a database. You would get a green result, but behind the scenes you are actually getting an error that was swallowed up. If you would like to see the error, you can temporarily drop in a dd() dump of the response content right after the call in the test.
This is expected. I purposely did NOT run the migrations, as mentioned earlier, but if you look at the error (or look in the storage/logs/laravel.log), you will see there's a query exception about the users table not existing. So, with this information we KNOW the repository is actually being used and it's actually working—attempting to hit the DB with a query on the users table.
Be sure to remove that dd code because it will no longer be necessary and will interfere with our test later.
So, now we can work up a little more code in the test:
Let me break this down a little.
I like being explicit in my tests, so I added the mockedUserRepo variable for clarity and type hinting. For example, down in the testing function, when I type the object operator (->) in my IDE (PHPStorm), it shows all the possible functions I can use from the repository AND the mock, because of how I used the pipe character (|) in the doc comment.
Second, I built up a setup function. This function must be public, and it's VERY important to call parent::setUp() because, if we don't, the Laravel application will not be built and we will get several errors.
Third, I created a mocked version of the repository with the code $this->mockedUserRepo = Mockery::mock(UserRepository::class), then assigned it to the Service Container as the same name (the first parameter—"UserRepository::class") with the code app()->instance(UserRepository::class, $this->mockedUserRepo).
This tells the Service Container to create an entry in the container with the "name" "App\Repositories\UserRepository" and assign the given class to that name; in this case, it's the mocked UserRepository. Now, even if the name "App\Repositories\UserRepository" already exists in the container, it will be replaced with the mocked version. Also note, this applies only to the current test; if I made a new test and didn't mock the repository, it would go back to hitting the database.
Finally, we can add the code that is going to do the "checking" that we are using the mock properly:
If you aren't familiar with Mockery, it's okay; you can learn more here. What's great about Mockery is that it's super simple to read. Basically, we expect the all function to be called once, with no arguments given, and to return null as a result of that "mocked" call.
Now, if we run "phpunit" in the terminal, we should get something similar to this:
Figure 1: The result of running "phpunit" in the terminal
Notice there are 0 assertions; this is expected, but a little confusing. There are assertions going on, but only in Mockery at this point. If you were to comment out the expectation set on the mocked repository, you would see Mockery bark, saying "Method Mockery_0_App_Repositories_UserRepository::all() does not exist on this mock object". This error would be swallowed up again, but by using the dd() technique mentioned earlier, you can see it. And, as one last precaution against these silent errors, you can replace the generic success assertion with an assertion on the mocked return value.
And now, any time there's a "break" in the test, this will catch it because we have the mock returning null! When I first picked up Laravel, this threw me for a loop. I hope I was able to help demystify some of Laravel's magic in the controllers.
P.S. The method will also work just the same if the constructor is used for injection; you can try it by changing your controller to inject the repository through the constructor instead.
Good luck out there!
Terry Rowland is a Senior Backend Web Developer at Enola Labs , a custom Web and mobile application development company located in Austin, Texas.
*** This article was contributed for exclusive publication on this site. ***

 

 67 

The Top Ten Ways to Be a Great ScrumMaster

By Zubin Irani , CEO of cPrime
Having worked in the Agile world for more than 10 years now, I have seen teams succeed, fail and everything in between—often largely based on the competency of the ScrumMaster and his or her ability to manage teams to project completion.
Following are the top ten ScrumMaster do's and don'ts derived from watching more than 250 Agile projects.
It's fun to be the expert, but that isn't the ScrumMaster's job. A good ScrumMaster lets everyone else shine and focuses on making each member of the team successful. Achieving this requires listening over speaking.
No matter how obvious the problem is, investigate further before commenting. You'll be surprised how often your "obvious" conclusion was wrong—and you'll be glad you kept your mouth shut.
Agile and the ScrumMaster position aren't about you. They're about the Team. Focus on serving the Team's needs, above all else.
Conflict can explode with little warning. When it does, you have to find and implement an effective solution. Stay calm, and focus on the facts. Keep bringing people back to the problem, and away from placing blame and emotions.
These individuals often don't know they are causing a problem, and most of the time they will pay attention and heed your guidance if they respect you.
You can't let one lemon sour the Team. Talk to the Team member's manager about moving the problematic person to a more appropriate Team or position, and then talk to the Team member. Start with, "You don't seem to be happy here. It seems to me that it might work better if you…. "
Most people will do it once, and learn. A second major failure gets a warning. If they hit three, it's time to move on.
Use your authority in the most diplomatic way possible, but use it when the need exists. You are supposed to make everything work, which means you enforce the process. Do your job. When you get pushback, diplomatically remind people that some things are in your area of authority, and you are making the call.
It's important to be objective and direct with team members, and friendship can influence how you respond or make decisions, which may create resentment among other team members. Build relationships with other ScrumMasters and Agile practitioners to provide an outlet as well as to get unbiased input to help you tackle tough problems.
You have responsibilities to your Team, to your company, to your customers, and to your conscience. There is no rule book to fall back on when these responsibilities collide. Strive for win-win solutions when you can, and strive for the best fallback solutions when win-win isn't possible. Recognize situations where you cannot accomplish anything by pushing. Ask yourself if your self-respect is worth losing your job, and then make the right decision.

 

 68 

Serverless Architectures on AWS: Monitoring Costs

By Peter Sbarski with Sam Kroonenburg for Manning Publishing
This article was excerpted from the book Serverless Architectures on AWS.
CloudWatch is an AWS component for monitoring resources and services running on AWS, setting alarms based on a wide range of metrics, and viewing statistics on the performance of your resources. When you begin to build your serverless system, you are likely to use logging more than any other feature of CloudWatch. It will help to track and debug issues in Lambda functions, and it's likely that you will rely on it for some time. Its other features, however, will become important as your system matures and goes to production. You will use CloudWatch to track metrics and set alarms for unexpected events.
Receiving an unpleasant surprise in the form of a large bill at the end of the month is disappointing and stressful. CloudWatch can create billing alarms that send notifications if total charges for the month exceed a predefined threshold. This is useful, not only to avoid unexpectedly large bills, but also to catch potential misconfigurations of your system.
For example, it is easy to misconfigure a Lambda function and inadvertently allocate 1.5GB of RAM to it. The function might not do anything useful except wait for 15 seconds to receive a response from a database. In a very heavy-duty environment, the system might perform 2 million invocations of the function a month, costing a little over $743.00. The same function with 128MB of RAM would cost around $56.00 per month. If you perform cost calculations up front and have a sensible billing alarm, you will quickly realize that something is going on when billing alerts begin to come through.
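A worked check of these figures, using the Lambda rates quoted later in this article ($0.00001667 per GB-second after the 400,000 GB-second monthly free tier):

2M invocations x 15s x 1.5GB   = 45,000,000 GB-seconds
45,000,000 - 400,000 free tier = 44,600,000 x $0.00001667 ≈ $743.48 per month

2M invocations x 15s x 0.125GB = 3,750,000 GB-seconds
3,750,000 - 400,000 free tier  = 3,350,000 x $0.00001667 ≈ $55.84 per month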
Follow these steps to create a billing alert:
Figure 1: The preferences page allows you to manage how invoices and billing reports are received.
Figure 2: It's good practice to create multiple billing alarms to keep you informed of ongoing costs.
Services such as CloudCheckr can help to track costs, send alerts, and even suggest savings by analyzing the services and resources in use. CloudCheckr covers several different AWS services, including S3, CloudSearch, SES, SNS, and DynamoDB (figure 3). It is richer in features and easier to use than some of the standard AWS tools. It is worth considering for its recommendations and daily notifications.
Figure 3: CloudCheckr is useful for identifying improvements to your system but the good features are not free.
AWS also has a service called Trusted Advisor that suggests improvements to performance, fault tolerance, security, and cost optimization. Unfortunately, the free version of Trusted Advisor is limited, so if you want to explore all of the features and recommendations it has to offer, you must upgrade to a paid monthly plan or access it through an AWS enterprise account.
Cost Explorer (figure 4) is a useful, albeit high-level, reporting and analytics tool built in to AWS. You must activate it first by clicking your name (or the IAM user name) in the top right-hand corner of the AWS console, selecting Cost Explorer from the navigation pane, and then enabling it. Cost Explorer analyzes your costs for the current month and the past four months. It then creates a forecast for the next three months. Initially, you may not see any information, because it takes 24 hours for AWS to process data for the current month. Processing data for previous months may take even longer. More information about Cost Explorer is available at http://amzn.to/1KvN0g2 .
Figure 4: The Cost Explorer tool allows you to review historical costs and estimate what future costs may be.
The Simple Monthly Calculator is a web application developed by Amazon to help model costs for many of its services. This tool allows you to select a service on the left side of the console and then enter information related to the consumption of that particular resource to get an indicative cost. Figure 5 shows a snippet of the Simple Monthly Calculator with an estimated monthly cost of $650.00. That estimate is mainly of costs for S3, CloudFront, and the AWS support plan. It is a complex tool and it's not without usability issues, but it can help with estimates.
You can click common customer samples on the right side of the console or enter your own values to see estimates. If you take the Media Application customer sample, something that could serve as a model for 24-Hour Video , it breaks down as follows:
Figure 5: The Simple Monthly Calculator is a great tool to work out the estimated costs in advance. You can use these estimates to create billing alarms at a later stage.
The cost of running serverless architecture often can be a lot less than running traditional infrastructure. Naturally, the cost of each service you might use will be different, but we can have a look at what it takes to run a serverless system with Lambda and the API Gateway.
Amazon's pricing for Lambda is based on the number of requests, the duration of execution, and the amount of memory allocated to the function. The first one million requests are free, with each subsequent million charged at $0.20. Duration is based on how long the function takes to execute, rounded up to the nearest 100ms. Amazon charges in 100ms increments while also taking into account the amount of memory reserved for the function.
A function created with 1GB of memory will cost $0.000001667 per 100ms of execution time, whereas a function created with 128MB of memory will cost $0.000000208 per 100ms. Note that Amazon prices may differ depending on the region and that they are subject to change at any time.
Amazon provides a perpetual free tier with 1 million free requests and 400,000 GB-seconds of compute time per month. This means that a user can perform a million requests and spend an equivalent of 400,000 seconds running a function created with 1GB of memory before they have to pay.
As an example, consider a scenario where you have to run a 256MB function five million times a month. The function executes for two seconds each time. The cost calculation follows:
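A sketch of the arithmetic, using the rates quoted above:

Requests: 5M - 1M free tier = 4M x $0.20 per million                   = $0.80
Compute:  5M x 2s x 0.25GB = 2,500,000 GB-seconds
          2,500,000 - 400,000 free tier = 2,100,000 GB-seconds
          2,100,000 x $0.00001667                                      = $35.007
Total:    $0.80 + $35.007                                              ≈ $35.807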
The total cost of running Lambda in the above example is $35.807. The API Gateway pricing is based on the number of API calls received and the amount of data transferred out of AWS. In US East, Amazon charges $3.50 for each million API calls received and $0.09/GB for the first 10TB transferred out. Given the above example and assuming that monthly outbound data transfer is 100GB a month, the API Gateway pricing is as follows:
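A sketch of the arithmetic:

API calls:     5M x $3.50 per million = $17.50
Data transfer: 100GB x $0.09/GB       = $9.00
Total:         $17.50 + $9.00         = $26.50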
The API Gateway cost in this example is $26.50. The total cost of Lambda and the API Gateway is $62.307 per month. It's worthwhile to attempt to model how many requests and operations you may have to handle on an ongoing basis. If you expect 2M invocations of a Lambda function that only uses 128MB of memory and runs for a second, you will pay approximately $0.20 a month. If you expect 2M invocations of a function with 512MB of RAM that runs for five seconds, you will pay a little more than $75.00. With Lambda, you have an opportunity to assess costs, plan ahead, and pay for only what you actually use. Finally, don't forget to factor in other services, such as S3 or SNS, no matter how insignificant their cost may seem to be.

 

 69 

Tips for MongoDB WiredTiger Performance Tuning

By Dharshan Rangegowda , founder of ScaleGrid.io.
MongoDB 3.0 introduced the concept of pluggable storage engines. Currently, there are a number of storage engines available for Mongo: MMAPV1, WiredTiger, MongoRocks, TokuSE, and so forth. Each engine has its own strengths and you can select the right engine based on the performance needs and characteristics of your application.
Starting with MongoDB 3.2.x, WiredTiger is the default storage engine. WiredTiger is the most popular storage engine for MongoDB and marks a significant improvement over the existing default MMAPv1 storage engine in the following areas:
In the rest of this article, I'll present some of the parameters you can tune to optimize the performance of WiredTiger on your server.
The size of the cache is the single most important knob for WiredTiger. By default, MongoDB 3.x reserves 50% (60% in 3.2) of the available memory for its data cache. Although the default works for most applications, it is worthwhile to try tuning this number to achieve the best possible performance for your application. The size of the cache should be big enough to hold the working set of your application.
Figure 1: The WiredTiger cache size
MongoDB also needs additional memory outside of this cache for aggregations, sorting, connection management, and the like, so it is important to make sure you leave MongoDB with enough memory to do its work. If not, there is a chance MongoDB will get killed by the OS's out-of-memory (OOM) killer.
The first step is to understand the usage of your cache with the default settings. Use the following command to get your cache usage statistics:
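A minimal sketch, using the mongo shell's serverStatus command (the output fields shown are illustrative of the real ones):

db.serverStatus().wiredTiger.cache

// Representative fields in the output:
//   "bytes currently in the cache" : ...,
//   "tracked dirty bytes in the cache" : ...,
//   "bytes read into cache" : ...,
//   "maximum bytes configured" : ...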
The first number to look at is the percentage of the cache that is dirty. If the percentage is high, increasing your cache size might improve your performance. If your application is read heavy, you can also track the "bytes read into cache" parameter. If this parameter remains constantly high, increasing your cache size might improve your read performance.
The cache size can be changed dynamically without restarting the server by using the following command:
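For example, in the mongo shell (the 2GB value is illustrative):

db.adminCommand({setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=2G"})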
If you would like the custom cache size to be persistent across reboots, you also can add the config instruction to the conf file:
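A sketch of the corresponding YAML configuration (the value is illustrative):

storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2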
Figure 2: Read and write tickets
WiredTiger uses tickets to control the number of read/write operations simultaneously processed by the storage engine. The default value is 128 and works well for most cases. If the number of tickets falls to 0, all subsequent operations are queued, waiting for tickets. Long-running operations might cause the number of tickets available to decrease, reducing the concurrency of your system. For example, if your read tickets are decreasing, there is a good chance that there are a number of long running unindexed operations. If you would like to find out which operations are slow, there are third-party tools available. You can tune your tickets up/down depending on the needs of your system and determine the performance impact.
You can check the usage of your tickets by using the following command:
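A minimal sketch, again via serverStatus in the mongo shell (the output shape is illustrative):

db.serverStatus().wiredTiger.concurrentTransactions

// Representative shape of the output:
// { "write" : { "out" : 0, "available" : 128, "totalTickets" : 128 },
//   "read"  : { "out" : 1, "available" : 127, "totalTickets" : 128 } }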
You can change the number of read & write tickets dynamically without restarting your server by using the following commands:
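For example, in the mongo shell (256 is an illustrative value):

db.adminCommand({setParameter: 1, wiredTigerConcurrentReadTransactions: 256})
db.adminCommand({setParameter: 1, wiredTigerConcurrentWriteTransactions: 256})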
Once you've made your changes, monitor the performance of your system to ensure that it has the desired effect.
Dharshan Rangegowda is the founder of ScaleGrid.io, where he leads products such as ScaleGrid, a MongoDB hosting and management solution to manage the lifecycle of MongoDB on public and private clouds, and Slow Query Analyzer, a solution for finding slow operations within MongoDB. He can be reached at @dharshanrg.
*** This article was contributed ***

 

 70 

The Value of Doing APIs Right: A Look at the SiriKit API Demoware

When Siri was first introduced, people thought it was much smarter than it actually is. I heard kids giggling for hours, asking it silly questions. In effect, Siri was good for executing Web searches by voice and giving sassy answers to questions about itself. Neat trick, but not very sophisticated. After a few months, most people quit using Siri because, honestly, it just wasn't that practically useful.
The Amazon Echo was widely mocked when it was introduced. Who is going to pay $200 for a speaker? It became a surprise smash hit, not because people needed another speaker but because it had an extensible API that allowed 3rd-party developers to code new capabilities for it. It quickly found multiple unserved niches, particularly in home automation. "Alexa, turn off the lights." People who own Echos almost universally say they use it every single day and find it has become an integral part of their experience at home.
The core difference between these two experiences is the existence of an API. The Echo has thousands of 3rd-party developers thinking up new ideas for the platform and teaching it new skills, and Siri has Apple. A 3rd-party developer who wants to make their app work with Siri has no option other than to index their app and hope it comes up as a search result on a Siri voice search.
There was a brief glimmer of hope recently when Apple introduced SiriKit. Finally, Apple was going to make it possible for 3rd-party developers to integrate their apps with Siri! Not so fast, enterprising developers: SiriKit only supports about a dozen canned interactions. They support ride booking (for example, book an Uber), person-to-person payments (send $20 to a friend on Venmo), starting and stopping a workout, and some basic CarPlay commands. Although this is some progress, this canned set of actions merely opens up a handful of possibilities for Siri. Apple is still a first-class citizen when it comes to integrating its own apps with Siri, and the 3rd-party marketplace is relegated to 3rd-class citizens in steerage.
Many of the limitations on integration with virtual assistants boil down to privacy concerns. Google Now reads all of my Gmail messages to provide me with helpful information. I don't want every app I install on my phone to start reading my email, too.
As a result of these privacy concerns, the better virtual assistant APIs are currently limited to being able to register your app for action commands. Google Voice Actions, Cortana, and Amazon all allow you to define phrases that your application can execute on. This is a good start and it allows for a reasonable level of integration with these virtual assistant platforms.
Being able to register for context is half of the battle. The platforms with action APIs will allow you to register for a command like, "Send flowers to Mom," and activate your flower ordering app. The problem is that the app doesn't know who your Mom is, even though Google does. The user's intent in this case is clearly to share their mother's name and address with the flower ordering app.
To make virtual assistants truly useful for end-users, these platforms need a way to integrate with 3rd-party applications that includes context without putting people's data at risk. I would propose that this could be done by allowing apps a richer method of registering not only the action commands they can respond to, but also the context they need to deliver on the user's action.
For example, you could register your car insurance company's app as subscribed to topics about insurance, cars, and household budgeting. Within each of these topics, you would define the relevant moments in natural-language terms; a phrase like "if the user is in a car accident" would define the broad topic areas that are relevant to your application. If these topic areas are triggered, the virtual assistant platform could pass along a pre-defined set of context information that is relevant to this experience, such as the type of car being considered for purchase. Within these topics, your application could define the more specific actions it can handle using that general context.
If the air bags have deployed, the insurance assistant can proactively pipe in and ask whether you'd like a claims agent to meet you or, in the case of Google Now, put a card at the top of your list with a button to summon an insurance agent.
Real magic can happen if virtual assistants start allowing 3rd parties to collaborate to deliver more value to the customer. For example, in a household budgeting scenario, multiple apps could collaborate to provide more information than any one company could provide by itself. Your bank, credit card company, wealth advisor, insurance, cable, and telephone providers all have a piece of your household's budgetary picture. The problem then arises of making all of these companies behave more in the interest of the user than of themselves.
Each company is incented to push itself to the forefront. The insurance company wants to sell car insurance, the wealth management company wants you to put more money under its management, and the cable company wants you to expand your channel line-up. If you asked your assistant to help you understand your budget, each of these providers screaming at you to sign up for more services would hardly be helpful.
As a result of the need to drive this collaboration, virtual assistant platforms will need to evolve to allow 3rd-party applications to describe the services they can perform in a situation like this. The virtual assistant can provide the appropriate context, and the 3rd-party application can describe what it can do for that context. The virtual assistant will then need to decide which of the various 3rd-party applications has the most relevant input for the current need.
To create a true virtual assistant platform that can unlock the power of the entire marketplace, 3rd-party applications need:
This could require more abstract reasoning than simpler assistants like Siri can currently muster. More advanced recognition systems like Watson would have no trouble assembling these pieces.
It's past time to open up virtual assistant APIs. New entrants like Viv are going to eat the lunch of these closed platforms. Truly open APIs enable a marketplace of innovation, broader than a dozen canned possibilities, that can create amazing, surprising, and memorable experiences.

 

 71 

What Is Jenkins?

If you have never heard about Jenkins, or you never quite understood what it is useful for, this article is for you. In the next few minutes, we will have an overview of Jenkins, meant to introduce you to this comprehensive tool dedicated to automating any kind of project.
Basically, Jenkins is an open source project written in Java and dedicated to sustaining continuous integration (CI) practices. The tasks that Jenkins can solve are related to project automation; more exactly, Jenkins is fully able to automate the building, testing, and integration of our projects. For example, in this article you will see how to chain GitHub -> Jenkins -> Payara Server to obtain a simple CI environment for a Hello World Spring-based application (don't worry, you don't need to know Spring).
So, let's delve a little into the Jenkins goals. We begin with the installation of Jenkins 2, continue with the major settings/configurations, install specific plug-ins, and finish with a quick-start example of automating a Java Web application.
In this article, we will assume the following:
To download Jenkins, simply access the official Jenkins Web site (https://jenkins.io/) and press the button labeled Download Jenkins , as seen in Figure 1:
Figure 1: Download Jenkins
We go for the weekly release, which is listed on the right side. Simply expand the menu button from Figure 1 and choose the distribution compatible with your system (OS) and needs. For example, we will choose to install Jenkins under Windows 7 (64-bit), as you can see in Figure 2:
Figure 2: Select a distribution compatible with the system
Notice that, even though the link is named 2.5.war, for Windows we download a specific installer. After the download, you should obtain a ZIP archive named jenkins-2.5.zip. Simply unzip this archive to a convenient location on your computer. You should see an MSI file named jenkins.msi. Double-click this file to proceed with the very simple installation steps. Basically, the installation should go pretty smoothly and should be quite intuitive; we installed Jenkins in the D:\jenkins 2.5 folder. At the end, Jenkins will be automatically configured as a Windows service and will be listed in the Services application, as in Figure 3:
Figure 3: Jenkins as a Windows service
Besides setting Jenkins up as a service, you will notice that the default browser was automatically started, as shown in Figure 4:
Figure 4: Unlock Jenkins
Well, this is the self-explanatory login page of Jenkins, so simply act accordingly to unlock Jenkins. In our case, the initialAdminPassword was 9d9f510d8ef043e98f7c574b3ea8adc0. Don't bother typing this password; simply copy and paste it. After you click the Continue button, you will see the page from Figure 5:
Figure 5: Install Jenkins plug-ins
Because we are using Jenkins for the first time, we prefer to go with the default set of plug-ins. Later on, we can install more plug-ins, so you don't have to worry that you didn't install a specific plug-in at this step. Notice that installing the suggested plug-ins may take a while, depending on your Internet connection (network latency), so be patient and wait for Jenkins to finish this job for you. While this job is in progress, you should see verbose monitoring that reveals the progress status, plug-in names, and the dependencies downloaded for those plug-ins. See Figure 6:
Figure 6: Monitoring plug-ins installation progress
You can use this time to spot some commonly used plug-ins, such as Git, Gradle, Pipeline, Ant, and so forth.
After this job is done, it is time to set up an admin user for Jenkins. You need to have at least one admin, so fill in the requested information accordingly (Figure 7):
Figure 7: Create the first Jenkins admin
If you press Continue as admin , Jenkins will automatically log you in with these credentials and you will see the Jenkins dashboard. If you press the Save and Finish button, you will not be logged in automatically and you will see the page from Figure 8:
Figure 8: Start using Jenkins
If you choose Save and Finish (or whenever you are not logged in), you will be prompted to log in via a simple form, as in Figure 9:
Figure 9: Log in to Jenkins as admin
After login, you should see the Jenkins dashboard, as in Figure 10:
Figure 10: Jenkins dashboard
So far, you have successfully downloaded, installed, and started Jenkins. Let's go further and see several useful and common configurations.
To work as expected, Jenkins needs a home directory and, implicitly, some disk space. In Windows (on a 64-bit machine), by default, the Jenkins home directory ( JENKINS_HOME ) is the place where you installed Jenkins. In our case, this is D:\jenkins 2.5. If you take a quick look into this folder, you will notice several sub-folders and files, such as the /jobs folder, used for storing job configurations; the /plugins folder, used for storing installed plug-ins; or the jenkins.xml file, containing some Jenkins configurations. So, this folder is where Jenkins stores plug-ins, jobs, workspaces, users, and so on.
Now, let's suppose that we want to move the Jenkins home directory from D:\jenkins 2.5 to C:\JenkinsData. To accomplish this task, we need to follow several steps:
By default, Jenkins will start on port 8080. If you are using this port for another application (for example, application servers such as Payara, Wildfly, and the like), you will want to manually set another port for Jenkins. This can be accomplished by following these steps:
By default, Jenkins will use 256MB, as you can see in jenkins.xml. To allocate more memory, simply adjust the corresponding argument. For example, let's give it 8192MB:
You also may want to adjust the perm zone or other JVM memory characteristics by adding more arguments:
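A sketch of how the <arguments> element in jenkins.xml might look after such changes (the exact set of flags in your file may differ; -Xmx sets the heap and -XX:MaxPermSize the perm zone):

<arguments>-Xrs -Xmx8192m -XX:MaxPermSize=512m -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>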
Please find more Jenkins parameters here.
Winstone is part of Jenkins; therefore, you can take advantage of settings such as --handlerCountStartup (set the number of worker threads to spawn at startup; default, 5) or --handlerCountMax (set the max number of worker threads to allow; default, 300).
Remember that, when we installed Jenkins, we chose the default set of plug-ins. Moreover, remember that we said Jenkins allows us to install more plug-ins later from the dashboard. Well, it is time to see how to deal with Jenkins plug-ins.
To see what plug-ins are installed in your Jenkins instance, simply select the Manage Jenkins | Manage Plugins | Installed tab. See Figure 15:
Figure 15: See the installed plug-ins
Installing a new plug-in is pretty simple. Select the Manage Jenkins | Manage Plugins | Available tab. Locate the desired plug-in(s) (notice that Jenkins provides a huge list of plug-ins, so you had better use the search filter feature), tick the desired plug-in(s), and click one of the available options listed at the bottom of the page. Jenkins will do the rest for you. See Figure 16:
Figure 16: Install a new plug-in
For example, later in this article we will need to instruct Jenkins to deploy the application WAR on a Payara Server. To accomplish this, we can install a plug-in named Deploy Plugin. So, in the Available tab, we used the filter feature and typed deploy. This brings the plug-in on screen, as in Figure 17. (If you don't use the filter, you will have to manually search through hundreds of available plug-ins, which is time-consuming.) Therefore, simply tick it and install it without restart:
Figure 17: Install Deploy plug-in
After installation, this plug-in will be listed under the Installed tab.
Before defining a job for Jenkins, it is good practice to take a look at the global tool configuration ( Manage Jenkins | Global Tool Configuration ). Depending on what types of jobs you want to run, Jenkins needs to know where to find additional tools, such as the JDK, Git, Gradle, Ant, Maven, and so forth. Each of these tools can be installed automatically by Jenkins once you tick the Install automatically checkbox. For example, in Figure 18, you can see that Jenkins will install Maven automatically:
Figure 18: Install Maven under Jenkins
But, if you already have Maven installed locally, you can un-tick the Install automatically checkbox and instruct Jenkins where to find Maven locally via the MAVEN_HOME environment variable. Either way, you have to specify a name for this Maven installation. For example, type Maven as the name and keep this in mind, because you will need it later.
Each tool can be installed automatically, or you can simply instruct Jenkins where to find it locally via environment variables (for the JDK, JAVA_HOME ; for Git, GIT_HOME ; for Gradle, GRADLE_HOME ; for Ant, ANT_HOME ; and for Maven, MAVEN_HOME ). Moreover, each tool needs a name used to identify it and refer to it later when you start defining jobs. This is useful when you have multiple installations of the same tool. If a required variable is not available, Jenkins will report this via an error message. For example, let's say that we decided to instruct Jenkins to use the local Git distribution, but we don't have GIT_HOME set; here is what Jenkins will report:
Figure 19: Install Git under Jenkins
This means that we need to set GIT_HOME accordingly or choose the Install automatically option. Once you set GIT_HOME , the error will disappear. So, before assigning jobs to Jenkins, take your time and ensure that you have successfully accomplished global tool configuration. This is a very important aspect!
Because this is the first Jenkins job, we will keep it very simple. Practically, what we will do is implement a simple CI project for a Hello World Spring application. This application is available here. Don't worry if you don't know Spring; it is not mandatory!
Furthermore, you have to link the repository (you can fork this repository) to your favorite IDE (for example, NetBeans, Eclipse, and so on) in such a way that you can easily push changes to GitHub. How to accomplish this is beyond this article's goal, but if you choose NetBeans, you can find the instructions here.
So, we assume that you have Jenkins installed/configured and the application opened in your favorite IDE and linked to GitHub. The next thing to do is to install Payara Server with its default settings and start it. By default, it should start on port 8080, with admin capabilities on port 4848.
Our next goal is to obtain the following automation: every three minutes, Jenkins will take the code from GitHub and compile it, and the resulting WAR will be deployed on Payara Server.
Open Jenkins in a browser and click New Item or Create a new job , as in Figure 20:
Figure 20: Create a new job in Jenkins
As you will see, there are several types of jobs (projects) available. We will choose the most popular one, which is the freestyle project, and we will name it HelloSpring :
Figure 21: Select a job type and name it
After you press the Ok button, Jenkins will open the configuration panel for this type of job. First, we will provide a simple description of the project, as in Figure 22 (this is optional):
Figure 22: Describe your new job
Because this is a project hosted on GitHub, we need to inform Jenkins about its location. For this, on the General tab, tick the GitHub project checkbox and provide the project URL (without the tree/master or tree/branch part):
Figure 23: Set the project URL
The next step consists of configuring the Git repository that contains our application in the Source Code Management tab. This means that we have to tick the Git checkbox and specify the repository URL, the credentials used for access, and the branches to build, as in Figure 24:
Figure 24: Configure Git repository
Further, let's focus on the Build Triggers tab. As you can see, Jenkins provides several options for choosing the moment when the application should be built. Most probably, you will want to choose the Build when a change is pushed to GitHub option, but for this we need to have a Jenkins instance visible on the Internet. This is needed by GitHub, which will use a webhook to inform Jenkins whenever a new commit is available. You also may go for the Poll SCM option, which periodically checks for changes before triggering any build; the build is triggered only when changes relative to the previous version are detected. But, for now, we go for the Build periodically option, which builds the project periodically without checking for changes. We set this cron service to run every three minutes:
Figure 25: Build project periodically
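In Jenkins' cron-style Schedule syntax, a three-minute interval can be written as follows (Jenkins also supports the H token, as in H/3 * * * *, to spread load evenly):

*/3 * * * *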
The schedule can be configured based on the instructions provided by Jenkins if you press the little question mark icon listed at the right of the Schedule section. By the way, don't hesitate to use those question marks whenever they are available, because they provide really useful information.
To build the project, Jenkins needs to know how to do it. Our application is a simple Maven Web application, and pom.xml is in the root of the application. So, on the Build tab, select the Invoke top-level Maven targets option from the Add build step drop-down. Furthermore, instruct Jenkins about the Maven distribution (remember that we configured a Maven instance under the name Maven earlier, in the Global Tool Configuration section) and about the goals you want to be executed (for example, clean and package ):
Figure 26: Configure Maven distribution and goals
So far, so good! Finally, if the application is successfully built, we want to delegate Jenkins to deploy it on Payara Server (remember that we installed the Deploy Plugin earlier, especially for this task). This is a post-build action that can be configured on the Post-Build Actions tab. From the Add post-build action drop-down, select the Deploy war/ear to a container item.
Figure 27: Add a post-build action
This will open a dedicated wizard where we have to configure at least the Payara Server location and the credentials for accessing it:
Figure 28: Configure Payara Server for deployment
Click the Save button and everything is done. Jenkins will report the new job on the dashboard:
Figure 29: The job was set
Now, you can try to fire a manual build or wait for the cron to run the build every three minutes. For a manual build, simply click the project name from Figure 29 and then click Build Now, as in Figure 30:
Figure 30: Running a build now
Each build is listed under the Build History section and can be in one of three stages: in progress, success, or failure. You can easily identify the status of your builds:
Figure 31: Build status
Most probably, if the build failed, you will want to see what just happened. For this, simply click the specific build and afterwards click Console Output, as in Figure 32:
Figure 32: Check build output
If the build fails because Jenkins lacks permission to write to the C:\Windows\Temp folder, all you have to do is provide write access to that folder via the Properties wizard:
Figure 34: Providing access for writing to the folder
If the build is successfully accomplished, the application is deployed on Payara Server and it is available on the Applications tab of the admin console. From there, you easily can launch it, as in Figure 35:
Figure 35: Run the application
It looks like our small project works like a charm! Further, make some modifications to the application, push them to GitHub, wait for the Jenkins cron to run the build, and notice how the modifications are reflected in your browser after a refresh.
Well, to take the project further, you can try to add more functionality, such as JIRA integration, a GitHub webhook, and the like.

 

 72 

Cross-field Validation in JSF

You have to be aware that, from a Java/JSF perspective, there are several limitations in using Bean Validation (JSR 303). One of them is the fact that JSF cannot validate class or method level constraints (so-called cross-field validation), only field constraints. Another consists of the fact that <f:validateBean> allows validation control on a per-form or a per-request basis, not on a per- UICommand or UIInput basis. In order to achieve more control, you have to be open to writing boilerplate code and to shaping custom solutions that work only in specific scenarios.
In this article, we will have a brief overview of three approaches for achieving cross-field validation using JSF core and external libraries. We will pass through the approaches provided by:
Let's suppose that we have a simple form that contains two input fields representing the name and the e-mail of a Web site member or admin. Next to these inputs, we have two buttons, one with the label Contact Member and another one with the label Contact Admin. When the user clicks the first button, he will "contact" the specified Web site member, and when he clicks on the second button, he will "contact" the specified Web site admin. The form is as follows:
For a Web site member/admin, the name input should not violate any of the constraints defined in a group named MemberContactValidationGroup. Moreover, for a Web site member/admin, the email input should not violate any of the constraints defined in the AdminContactValidationGroup group. Even more, we have a constraint over email in the default group (applicable to members and admins).
Next, we should attach these constraints to the name and email inputs, but, we need to obtain the following functionality:
Finding a solution based on <f:validateBean> will end up in some boilerplate code, because it will require a "bunch" of tags, EL expressions, conditions, server-side code, and so forth. Most likely, at the end, our form will look like a total mess. Another approach is to redesign the application and use two forms: one for members and one for admins.
Further, let's suppose that the provided email should always start with the name ( getEmail().startsWith(getName()) ). This is basically a cross-field constraint that can be applied via a class level constraint. But, JSF doesn't support this kind of constraint, so you have to provide another solution (not related to Bean Validation), like placing the validation condition in the action method, or in the getters (if there is no action method). Multiple components can be validated by using <f:event> with postValidate or, if you need to keep the validation in the Process Validations phase, by using a JSF custom validator.
The features brought by OmniFaces via the <o:validateBean> tag are exactly what we need to solve our use case. Although the standard <f:validateBean> only allows validation control on a per-form or a per-request basis, <o:validateBean> allows us to control bean validation on a per- UICommand or UIInput component basis.
For example, we can obtain the claimed functionality via <o:validateBean>, like this:
Listing 1: The complete application is named ValidateBean_1.
Now, let's discuss class level validation. The standard <f:validateBean> does not provide anything related to class level validation, so we can "jump" directly to <o:validateBean>. Right from the start, you should know that <o:validateBean> supports an attribute named method, which indicates whether this is a copy bean validation (the default) or an actual bean validation.
In the case of copy bean validation, OmniFaces tries a suite of strategies for determining the copy mechanism. By default, OmniFaces comes with an interface ( Copier ) that is to be implemented by classes that know how to copy an object, and provides four implementations (strategies) of it:
Besides these four implementations (strategies), OmniFaces comes with another one, named MultiStrategyCopier , which basically defines the order of applying the above copy strategies: CloneCopier , SerializationCopier , CopyCtorCopier , NewInstanceCopier. When one of these strategies obtains the desired copy, the process stops. If you already know the strategy that should be used (or you have your own Copier strategy; for example, a partial object copy strategy), you can explicitly specify it via the copier attribute (for example, copier="org.omnifaces.util.copier.CopyCtorCopier" ). In the OmniFaces Showcase, you can see an example that uses a custom copier. Moreover, you can find more details about Copier on the OmniFaces Utilities ZEEF page, OmniFaces Articles block, in the Copy Objects via OmniFaces Copier API article.
Now, let's focus on our cross-field validation: getEmail().startsWith(getName()). To obtain a class level constraint based on this condition, we need to follow several steps (a sketch of the first three steps follows the list):
1. Wrap this constraint in a custom Bean Validation validator (for example, ContactValidator ).
2. Define a proper annotation for it (for example, ValidContact , used as @ValidContact ).
3. Annotate the desired bean (optionally, add it in a group(s)).
4. Use <o:validateBean> to indicate the bean to be validated via the value attribute (a javax.el.ValueExpression that must evaluate to java.lang.Object ), and the corresponding groups (this is optional). Additionally, you can specify actual bean validation via the method attribute, and a Copier via the copier attribute.
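A minimal sketch of steps 1 through 3 (the Contact bean, package layout, and message text are assumptions; the cross-field rule is the article's getEmail().startsWith(getName())):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = ContactValidator.class)
@interface ValidContact {                        // step 2: the annotation
    String message() default "E-mail must start with the name";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class ContactValidator implements ConstraintValidator<ValidContact, Contact> {  // step 1
    @Override
    public void initialize(ValidContact annotation) { }

    @Override
    public boolean isValid(Contact contact, ConstraintValidatorContext ctx) {
        // the cross-field rule: the e-mail must start with the name
        return contact.getEmail() != null && contact.getName() != null
                && contact.getEmail().startsWith(contact.getName());
    }
}

@ValidContact                                    // step 3: annotate the bean itself
class Contact {
    private String name;
    private String email;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}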
Listing 2: The complete application is named ValidateBean_2.
As you probably know, PrimeFaces comes with very useful support for client side validation based on the JSF validation API and Bean Validation. In this post, we will focus on Bean Validation, which can be used successfully as long as we don't need cross-field or class level validation. This means that the validation constraints placed at the class level will not be recognized by PrimeFaces client side validation.
In this post, you can see a rather custom, but quick to implement, solution to obtain cross-field client side validation for Bean Validation using PrimeFaces. We have a user contact made of a name and an e-mail, and our validation constraint is of the type: the e-mail must start with the name (for example, name@domain.com ):
To accomplish this task, we will slightly adapt the PrimeFaces custom client side validation .
First, we create a ValidContact annotation:
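A minimal sketch of such a field level annotation (a reconstruction, not the original listing; Bean Validation 1.1 is assumed):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import javax.validation.Constraint;
    import javax.validation.Payload;

    // Marks the fields that take part in the cross-field check
    @Constraint(validatedBy = ContactValidator.class)
    @Target({ ElementType.FIELD, ElementType.METHOD })
    @Retention(RetentionPolicy.RUNTIME)
    public @interface ValidContact {
        String message() default "E-mail must start with the name";
        Class<?>[] groups() default {};
        Class<? extends Payload>[] payload() default {};
    }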
Further, in our bean, we annotate the proper fields (name and email) with this annotation. We need to do this to indicate the fields that enter into cross-field validation, so annotate each such field:
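For example (ContactBean is an illustrative name; CDI and javax.faces.view.ViewScoped are assumed):

    import java.io.Serializable;
    import javax.faces.view.ViewScoped;
    import javax.inject.Named;

    @Named
    @ViewScoped
    public class ContactBean implements Serializable {

        @ValidContact
        private String name;

        @ValidContact
        private String email;

        // getters and setters omitted for brevity
    }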
Now, we write the validator. Here, we need to keep the name until the validator also gets the e-mail. For this, we can use the faces context attributes, as below:
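A minimal sketch of this validator; it assumes the name component is positioned (and therefore validated) before the e-mail component, and the attribute key is illustrative:

    import java.util.Map;
    import javax.faces.context.FacesContext;
    import javax.validation.ConstraintValidator;
    import javax.validation.ConstraintValidatorContext;

    public class ContactValidator implements ConstraintValidator<ValidContact, String> {

        private static final String NAME_KEY = "validContact.name";

        @Override
        public void initialize(ValidContact annotation) {
            // nothing to initialize
        }

        @Override
        public boolean isValid(String value, ConstraintValidatorContext context) {
            Map<Object, Object> attributes = FacesContext.getCurrentInstance().getAttributes();
            if (!attributes.containsKey(NAME_KEY)) {
                // first invocation: this is the name field; stash it until the e-mail arrives
                attributes.put(NAME_KEY, value);
                return true;
            }
            // second invocation: this is the e-mail field; perform the cross-field check
            String name = (String) attributes.remove(NAME_KEY);
            return value != null && name != null && value.startsWith(name);
        }
    }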
Now, we have to accomplish the client-side validation. Again, the same trick is applied on the client: the name is stored into an array (you can add more fields here) until the e-mail arrives.
Finally, we ensure the presence of a ClientValidationConstraint implementation, which maps our constraint to the client side validation metadata.
Done! The complete application is named PFValidateBeanCrossField.
JSF 2.3 will come with a new tag, named <f:validateWholeBean>. As its name suggests, this tag enables class level (whole bean) validation. The tag contains two important attributes (more will be added): value, which references the bean to be validated, and validationGroups, which lists the validation groups to apply.
This feature causes a temporary copy of the bean referenced by the value attribute to be created; the class level constraints are checked against that copy.
Here is a brief example that ensures that the provided name and e-mail fields (the contact) are individually valid and that the e-mail also starts with the name (for example, valid: nick, nick_ulm@yahoo.com).
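Since the original listing is not reproduced here, the following is a hedged reconstruction (the ContactGroup validation group and the bean names are assumptions): the inputs opt into the group, and the whole-bean validation runs only after the individual fields pass their own constraints:

    <h:form>
        <h:inputText value="#{contactBean.contact.name}">
            <f:validateBean validationGroups="javax.validation.groups.Default,com.example.ContactGroup" />
        </h:inputText>
        <h:inputText value="#{contactBean.contact.email}">
            <f:validateBean validationGroups="javax.validation.groups.Default,com.example.ContactGroup" />
        </h:inputText>
        <!-- the Contact class is annotated with @ValidContact(groups = ContactGroup.class) -->
        <f:validateWholeBean value="#{contactBean.contact}"
                             validationGroups="com.example.ContactGroup" />
        <h:commandButton value="Send" action="#{contactBean.send}" />
    </h:form>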
The complete application is named JSF23ValidateWholeBeanExample (I've tested with Mojarra 2.3.0-m04 under Payara 4). You can download the sample code from here.

 

 73 

15 Amazing Mobile Apps for Aspiring Designers

Adobe Comp CC is a dedicated iPad app for designers who love working on layout. One of its core tasks is to let designers create print, Web, and mobile layouts.
Comp CC also supports one-tap sharing to the Adobe cloud. An added advantage of Adobe Comp CC is its intuitive drawing gestures: even roughly drawn shapes are turned into crisp graphics.
Get Adobe Comp CC .
Comp CC supports vector shapes, colors, and text styles from Creative Cloud Libraries and Adobe Toolkit.
Infinite Design provides an exceptional place for creating vector graphic designs. What's more, it gives designers an opportunity to design on an unrestrictive canvas with multiple layers to work on.
Looking for more liberty? Well, this app truly stands by its name—it comes with infinite canvas sizes.
Get Infinite Design app for Android .
Infinite Design's multiple layer designing magnifies a designer's imagination
Paper is not a dedicated design app, but it is still much more than what a designer could ever think of. It gives you a paper tableau on your smart devices. Simple. Make notes, draw sketches, create lists. Do anything and everything.
With its extensive functionality, it also matches the speed of your fingers; therefore, you can put down on this paper anything that inspires you.
To effectively utilize this app for optimal output, you can use any of the available tools like FiftyThree Pencil, Pogo Connect Smart Pen, and Just Mobile AluPen.
Get Paper for iPhone and iPad .
Paper by FiftyThree offers dynamic color ranges supporting various digital pens for sketching to perfection.
SketchBook provides an intuitive space for aspiring designers to draw, sketch, and paint their imagination on a digital canvas. Its sheer brilliance, the ability to match the real-world physical experience, will dazzle you: it mimics the feel of pencils, pens, markers, and brushes on paper.
As far as its offerings are concerned, it comes preloaded with 10 preset brushes, synthetic pressure sensitivity, and multiple layers with editing options. The layer editor has 3 to 16 blending modes, and it also includes tools for symmetry and transformation.
Get Autodesk SketchBook for iOS and Android Devices .
Autodesk Sketchbook helps organize your artwork in its Gallery with multiple view options.
Sketchworthy gives liberty to designers by providing a virtual notebook on their iOS devices. What makes Sketchworthy an important design tool is its ability to capture anything from maps to Web pages, and photos, of course. Once captured, designers have the liberty to choose from a variety of papers from the paper store.
Get SketchWorthy for iPhone and iPad .
Bundled with this app are packs for creating blueprints, graphs, to-do lists, planners, and much more.
Adobe Photoshop Sketch is a flawless, vector-based digital sketchpad with multiple uses. Stroke scalability comes with free-hand drawing supported by 64x zoom, which allows a designer to work on finer details.
Photoshop Sketch thrives on detail, giving designers a platform to create complexity in an image. That complexity can be elevated by incorporating up to 10 drawing layers.
Designers get the freedom to add depth and dimension. Moreover, it comes loaded with a stock of high-resolution, royalty-free images.
Get Adobe Ideas for iPhone and iPad .
The sure shot delight for designers is its capability to integrate with the Adobe creative suite.
iDesign is one of the most active and precision-driven 2D vector design apps. Designers can make the best use of this app for making professional vector-based designs, including illustrations and technical drawings. iDesign comes equipped with sensor-active touch points. It gives the designer complete control over the design.
At its artistic utility level, iDesign works purely with lines. It gives you plenty of options to choose from, including adding end points, ellipses, fills, colors, and transparency.
Get iDesign for iOS and Android .
For designers, iDesign is like a muse, as its advanced tools provide symmetry and precise editing options.
SwatchMatic comes with an amazing utility that every designer aspires to have. Let's put it straight: SwatchMatic is for capturing, combining, and sharing the colors you adore the most. It pumps life into static designs by letting you pick colors from your surroundings. You simply point your phone camera at a real-world object and voilà! It captures real-world colors into the digital world.
Get SwatchMatic for Android .
Expressive freedom for the designer—it allows editing individual colors in the palette with easy sliders.
Writing on photographs was never this easy. Gone are the days of adjusting text to fit the shape and size. The uniqueness of PathOn lies in letting you put text scrawls onto an arbitrary path.
Also, it gives plenty of options to select the text style, colors, and size.
Get PathOn app for iPhone and iPad .
The workability is simple. Just choose the picture, write text, and guide the path.
Intaglio Sketchpad is a fully functional, vector-based sketchpad for all iPhones and iPads. It is the app to use if you are almost a pro designer. It comes integrated with 11 preloaded drawing tools, and Intaglio explores the best of the editing scope, apart from just designing.
Get Intaglio Sketchpad for iPhone and iPad .
The vector editing options comprise group editing, layer editing, customizable preloaded graphics, and image morphing.
Nothing can be more amazing for aspiring artists than bringing their imagination onto canvas this easily. One just needs to drift their fingers briskly, and that is it. Although it seems like a fun-time app, it is quite possible to land on the next big design idea while doodling your way through Doodle Buddy.
The UI is also one of the amazing factors of Doodle Buddy. It is possible to undo your last stroke if you require changes in the design; to start over, you just need to shake the device.
Get Doodle Buddy for iOS.
It is possible to connect to a network and draw along with your friends online.
This Marvel app is a boon for mobile app designers. It lets you turn your design ideas into reality within just a few minutes. Marvel turns your sketches into an app demo through a simple, three-step process: draw the screen on paper, take a picture of it with Marvel, and then sync them.
Get MarvelApp for iOS and Android .
This app is a perfect gateway for aspiring designers to create their initial mobile app prototype.
A very simple app with great utility, iRuler simply displays a virtual ruler on your device. It is an essential app for aspiring designers who want to take precise measurements of real-world objects on the go.
Get iRuler App for iPhone and iPad .
Designers can efficiently use their fingers to scroll this infinite ruler.
LooseLeaf sets you loose with your ideas. Aspiring designers run through many ideas on a day-to-day basis, and with LooseLeaf it becomes easy to jot down a quick sketch anytime, anywhere.
With LooseLeaf, a designer can draw diagrams dramatically faster. Moreover, it is easy to cut and crop designs with the scissors tool.
Get LooseLeaf for iPhone and iPad .
LooseLeaf is a no-frill design app for aspiring designers to set their hands free on this dry-erase board.
What good is a design if it only sits on your mobile phone? Aspiring designers need to put their designs before prospective clients so that those designs get wings. Behance is an intuitive app platform that allows designers to put their portfolio on the Internet and share it with people.
Get Behance for iOS and Android .
So, here it is, the space to put your design to best use. Make a living out of your design—get noticed.
Well, these are just a few applications out of an infinite gamut; which one to use depends entirely on the designer and the utility he/she is looking for in a mobile app. The time has come to break the norm, just as design changed its platform over time from paper to computers; now is the time to sail the mobile way.
Mobile technology has swept away a significant share of the desktop and laptop market. Going with the same flow, it would not come as a surprise if aspiring designers started using mobile applications as their dedicated software in the near future. This way, one can seriously save on resources like space and money.
By Shahid Abbasi
One of the most beautiful things about design is that it has the power to captivate your thoughts with the sheer vision of your eyes. That is why we, at Design Instruct, strive to drive the design community ahead. Each new generation brings innovation in design, and today's world is no different.
Every designer aspires to create a masterpiece someday, and technology provides one of the most amazing platforms for it. Especially now, with mobile phones, it is possible to create stunning visuals.
Mobile apps offer wonderful tools to pull out your latent design skills. I am here to share with you some of the most amazing mobile apps for designers.
Shahid Abbasi is a marketing consultant with Peerbits , one of the top iPhone app development companies. He creates highly polished iOS apps and also has expertise in Android app development. Shahid likes to keep busy with his team, and to provide top-notch mobility solutions for enterprises and startups.

 

 74 

Elastic Leadership: Review the Code

By Roy Osherove
This article was excerpted from the book Elastic Leadership.
Robert Martin (Uncle Bob) has been a programmer since 1970. He is the Master Craftsman at 8th Light, Inc., and the author of many books, including The Clean Coder, Clean Code, Agile Software Development: Principles, Patterns, and Practices, and UML for Java Programmers. He is a prolific writer and has published hundreds of articles, papers, and blog posts. He served as the Editor-in-Chief of the C++ Report and as the first chairman of the Agile Alliance. Here is his advice for new software team leaders, and my feedback on it.
One of the biggest mistakes that new software team leaders make is to consider the code written by the programmers as the private property of the author, as opposed to an asset owned by the team. This causes the team leaders to judge code based on its behavior rather than its structure. Team leaders with this dysfunction will accept any code so long as it does what it is supposed to do, regardless of how it is written.
Indeed, such team leaders often don't bother to read the other programmers' code at all. They satisfy themselves with the fact that the system works and divorce themselves from system structure. This is how you lose control over the quality of your system.
And, once you lose that control, the software will gradually degrade into an unmaintainable morass. Estimates will grow, defect rates will climb, morale will decline, and eventually everyone will be demanding that the system be redesigned.
A good team leader takes responsibility for the code structure as well as its behavior.
A good team leader acts as a quality inspector, looking at every line of code written by any of the programmers under their lead.
A good team leader rejects a fair bit of that code and asks the programmers to improve the quality of that code.
A good team leader maintains a vision of code quality.
They will communicate that vision to the rest of the team by ensuring that the code they personally write conforms to the highest quality standards and by reviewing all of the other code in the system and rejecting the code that does not meet those exacting standards.
As teams grow, good team leaders will recruit lieutenants to help them with this review and enforcement task. The lieutenants review all the code, and the team leader falls back on reviewing all the code written by the lieutenants and spot checking the code written by everyone else.
Code is a team asset, not personal property. No programmer should ever be allowed to keep their code private. Any other programmer on the team should have the right to improve that code at any time. And the team leader must take responsibility for the overall quality of that code.
The team leader must communicate and enforce a consistent vision of high quality and professional behavior.
Speaking from the influence-forces point of view, Uncle Bob advocates that we influence the team by creating environmental rewards and punishments for writing good code (the team leader says it has to be done, or else…), which can definitely effect a positive outcome.
Here's an apparent paradox, though. You'd be hard-pressed to find any team leader who disagrees with any piece of this text, and at the same time it is extremely difficult to find team leaders who actually practice what they claim to preach.
This is only an apparent paradox, however. Once we look at things from the systems viewpoint, things begin to make more sense. A good way to take the systems view is to think about the influence forces we just discussed and try to dissect why so many team leaders don't practice what they preach.
To start, let's choose one core behavior we'd like our team leader to practice:
-- "A good team leader acts as a quality inspector, looking at every line of code written by any of the programmers under their lead. "
Let's look at each force, and try to imagine a scene from a real-life "enterprise" organization setting.
OK, socially, when working with peers and colleagues, things seem to be getting a bit murky, and that team member has a good point: We are under some serious time pressure.
You could say we are in survival mode, so should we really refactor that code?
On top of this, other team leaders seem to be doing just fine without this code quality crusade looming over their heads (they do bitch quite a bit about the quality of the products, but hey, don't we all?).
So maybe the problem is more systemic? Let's look at the last two factors.
These last two points complete our "systems" perspective. They point to a serious flaw: the team leader doesn't have an incentive to do the right thing, or worse, has an incentive to do the wrong thing (or else be berated by the managers).
Without solving this issue, as well as the social issues, it will be very difficult to see many team leaders taking that extra step towards the things they really believe in.
Uncle Bob is asking team leaders to influence the team in the right direction by changing environmental forces. But getting the team leaders to do this pushing might itself require changing the environmental forces that act on the leaders in the first place, which is one of the reasons why so many leaders today talk the talk, but don't really walk the walk.
What would you change in the place you work, at the system level, to enable team leaders to "do the right thing"?
What is the first step to making these changes happen? For example, "I'll set up a meeting with the CTO about this" or "I will do a presentation to X folks about this" might be a good first step, but your situation may need different steps first.

 

 75 

John Lewis CIO Paul Coby promoted to uber-CIO of John Lewis Partnership

John Lewis CIO Paul Coby has been promoted to CIO of the entire John Lewis Partnership, a role that will put him in overall charge of the IT at supermarket chain Waitrose, as well as the John Lewis department stores.
Coby's elevated role will put him in charge of delivering IT not just to John Lewis's 47 department stores in the UK, but also to Waitrose's 349 supermarket branches. He will also be in overall charge of the IT behind the company's logistics and supply-chain systems, as well as the technology in its warehouses, including the growing home delivery business.
The structure of the organisation will not immediately change, however, with the John Lewis and Waitrose IT directors continuing to report to their divisional managing directors, while the department stores group will recruit a new IT director to succeed the newly promoted Coby.
Coby joined John Lewis from British Airways in 2011, and talked to Computing about his strategy shortly after joining in an in-depth CIO interview. In 2013, the company revealed that it had passed the £1bn mark in annual web sales following a 40 per cent increase in sales generated via the JohnLewis.com website over the previous financial year.
Those sales were supported by a new logistics system, based on Oracle ATG and implemented before Christmas, that enables customers to order items up until 7pm for collection in-store after 2pm the next day. That signalled a more wide-ranging shift to Oracle at John Lewis.
In 2014, Coby embarked on an ambitious plan to replace some 50 legacy IT systems with Oracle E-Business Suite. The shift was intended to make John Lewis more responsive and flexible, as well as enabling it to more easily get a complete view of customers across all channels.
The company is currently mid-way through the rollout, which is slated for completion by the end of 2018.
The John Lewis Partnership also includes Broadband, Insurance, Opticians and Foreign Currency businesses, as well as the core supermarket chain and department stores group.


Total 75 articles.
Created at 2016-07-05 18:00