Tech Guides

852 Articles

Why you should learn Isomorphic JavaScript in 2017

Sam Wood
26 Jan 2017
3 min read
One of the great challenges of JavaScript development has been wrangling your code for both the server and the client side of your site or app. Full-stack JS devs have worked to master the skills to work on both the frontend and the backend, and numerous JS libraries and frameworks have been created to make your life easier. That's why Isomorphic JavaScript is your Next Big Thing to learn for 2017.

What even is Isomorphic JavaScript?

Isomorphic JavaScript refers to JavaScript applications that run on both the client and the server. The term comes from a mathematical concept, whereby a property remains constant even as its context changes. Isomorphic JavaScript therefore shares the same code, whether it's running in the context of the backend or the frontend. It's often called the 'holy grail' of web app development.

Why should I use Isomorphic JavaScript?

"[Isomorphic JavaScript] provides several advantages over how things were done 'traditionally'. Faster perceived load times and simplified code maintenance, to name just two," says Google engineer and Packt author Matt Frisbie in our 2017 Developer Talk Report. Netflix, Facebook and Airbnb have all adopted Isomorphic libraries for building their JS apps.

Isomorphic JS apps are *fast*: because the first page view is rendered on the server from the same code base, the user isn't left waiting for client-side JavaScript to load and parse before anything appears. It might only be a second - but that slow load time can be all it takes to frustrate and lose a user. Isomorphic apps get HTML content in front of the user sooner, ensuring a better user experience overall.

Isomorphic JavaScript isn't just quick for your users, it's also quick for you. By utilizing one framework that runs on both the client and the server, you'll open yourself up to a world of faster development times and easier code maintenance.

What tools should I learn for Isomorphic JavaScript?

The premier and most powerful tool for Isomorphic JS is probably Meteor - the full-stack JavaScript platform. With 10 lines of JavaScript in Meteor, you can do what would take you thousands elsewhere. There's no need to worry about building your own stack of libraries and tools - Meteor does it all in one single package.

Other Isomorphic-focused libraries include Rendr, created by Airbnb. Rendr allows you to build a Backbone.js + Handlebars.js single-page app that can also be fully rendered on the server side - it was used to build the Airbnb mobile web app for drastically improved page load times. Rendr also strives to be a library rather than a framework, meaning that it can be slotted into your stack as you like and gives you a bit more flexibility than a complete solution such as Meteor.
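To make the 'same code on both sides' idea concrete, here is a minimal sketch of the pattern the article describes. It is not taken from Meteor or Rendr; the file names, port and page fields are invented for illustration, and it assumes a Node server with Express installed plus a bundler that ships the shared module to the browser.

// shared/render.ts - the same module is imported on the server and in the browser
export interface Page {
  title: string;
  body: string;
}

export function renderPage(page: Page): string {
  // Plain template strings keep the sketch dependency-free
  return `<h1>${page.title}</h1><p>${page.body}</p>`;
}

// server.ts - the first request is rendered to HTML on the server
import express from "express";
import { renderPage } from "./shared/render";

const app = express();
app.get("/", (_req, res) => {
  res.send(renderPage({ title: "Hello", body: "Rendered on the server" }));
});
app.listen(3000);

// client.ts - later navigation re-renders in the browser using the very same function
import { renderPage } from "./shared/render";

document.body.innerHTML = renderPage({
  title: "Hello again",
  body: "Rendered in the browser with the same code",
});

The point is simply that renderPage lives in one place: the server uses it for the initial response and the client reuses it for subsequent updates, which is the property frameworks like Meteor and Rendr generalise.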


Handpicked for your Weekend Reading – 24th Nov ’17

Aarthi Kumaraswamy
24 Nov 2017
2 min read
We hope you had a great Thanksgiving and are having the time of your life shopping for your wishlist this weekend. The last thing you want to do is spend time you would rather spend shopping scouring the web for content to read. So here is a brief roundup of the best of what we published on the Datahub this week, for your weekend reading.

Thanksgiving Weekend Reading
• A mid-autumn Shopper's dream – What an Amazon-fulfilled Thanksgiving would look like
• Data science folks have 12 reasons to be thankful for this Thanksgiving
• Black Friday Special - 17 ways in 2017 that online retailers use machine learning
• Through the customer's eyes - 4 ways Artificial Intelligence is transforming e-commerce

Expert in Focus
• Shyam Nath, director of technology integrations, Industrial IoT, GE Digital, on Why the Industrial Internet of Things (IIoT) needs Architects

3 Things that happened this week in Data Science News
• Amazon ML Solutions Lab to help customers "work backwards" and leverage machine learning
• Introducing Gluon - a powerful and intuitive deep learning interface
• New MapR Platform 6.0 powers DataOps

Get hands-on with these Tutorials
• Visualizing 3D plots in Matplotlib 2.0
• How to create 3D Graphics and Animation in R
• Implementing the k-nearest neighbors algorithm in Python

Do you agree with these Insights & Opinions?
• Why you should learn Scikit-learn
• 4 ways Artificial Intelligence is leading disruption in Fintech
• 7 promising real-world applications of AI-powered Mixed Reality


AI chip wars: Is Brainwave Microsoft's Answer to Google's TPU?

Amarabha Banerjee
18 Oct 2017
5 min read
When Google decided to design its own chip, the TPU, it generated a lot of buzz about faster and smarter computation with its ASIC-based architecture. Google claimed its move would significantly enable intelligent apps to take over, and industry experts somehow believed a reply from Microsoft was always coming (remember Bing?). Well, Microsoft has announced its arrival into the game - with its own real-time AI-enabled chip called Brainwave. Interestingly, as the two tech giants compete in chip manufacturing, developers are certainly going to have more options while facing the complex computational demands of modern-day systems.

What is Brainwave?

Until recently, Nvidia was the dominant market player in the microchip segment, creating GPUs (Graphics Processing Units) for faster processing and computation. But after Google disrupted the trend with its TPU (Tensor Processing Unit), the surprise package in the market has come from Microsoft. More so because its 'real-time data processing' Brainwave chip claims to be faster than the Google chip (the TPU 2.0, or Cloud TPU). The one thing the Google and Microsoft chips have in common is that both can train and simulate deep neural networks much faster than any of the existing chips. The fact that Microsoft claims Brainwave supports real-time AI systems with minimal lag by itself raises an interesting question: are we looking at a new revolution in the microchip industry? The answer perhaps lies in the inherent methodology and architecture of the two chips (TPU and Brainwave), the way they function, and the practical challenges of implementing them in real-world applications.

The Brainwave architecture: move over GPU, the DPU is here

In case you are wondering what the hype around Microsoft's Brainwave chip is about, the answer lies directly in its architecture and design. Present-day computational standards are defined by high-end games, for which GPUs (Graphical Processing Units) were originally designed. Brainwave differs completely from the GPU architecture: the core components of a Brainwave chip are Field Programmable Gate Arrays, or FPGAs. Microsoft has deployed a huge number of FPGA modules, on top of which DNN (Deep Neural Network) layers are synthesized. Together, this setup can be compared to something like hardware microservices, where each task is assigned by software to different FPGA and DNN modules. These software-controlled modules are called DNN Processing Units, or DPUs. This eliminates CPU latency and the need to transfer data back and forth to the backend.

There are two seemingly different approaches here: the hard DPU and the soft DPU. Microsoft has taken the soft DPU approach, where the allocation of memory modules is determined by software and by the volume of data at the time of processing; a hard DPU has a predefined memory allocation, which doesn't allow the flexibility so vital to real-time processing. The software-controlled design is exclusive to Microsoft, and unlike other AI processing chips, Microsoft has developed its own custom data types that are faster to process. This enables the Brainwave chip to perform near real-time AI computations easily. Thus, in a way, Microsoft Brainwave holds an edge over the Google TPU when it comes to real-time decision making and computation.

Brainwave's edge over TPU 2 - is it real time?

The reason Google ventured into designing its own chips was the need to increase the number of data centers as user queries grew. Google realized that instead of running every query through the data center, it would be far more plausible for computation to be performed in the native system. That required more computational capability than modern-day market leaders such as Intel's x86 Xeon processors and Nvidia's Tesla K80 GPUs offered. But Google opted for Application Specific Integrated Circuits (ASICs) instead of FPGAs, the reason being that its design was completely customizable: not specific to one particular neural network, but applicable to multiple networks. The trade-off for this ability to run multiple neural networks was, of course, real-time computation, which Brainwave can achieve because of its DPU architecture. The initial data released by Microsoft shows that Brainwave has a data transfer bandwidth of 20 TB/sec, 20 times faster than the latest Nvidia GPU chip. The energy efficiency of Brainwave is also claimed to be 4.5 times better than current chips. Whether Google will up its ante and improve the existing TPU architecture to make it suitable for real-time computation is something only time can tell.

(Image source: Brainwave_HOTCHIPS2017 presentation on the Microsoft Research Blog)

Future outlook and challenges

Microsoft is yet to declare benchmarking results for the Brainwave chip, but Microsoft Azure customers will most definitely look forward to its availability for faster and better computation. What is even more promising is that Brainwave works seamlessly with Google's TensorFlow and Microsoft's own CNTK framework. Tech startups like Rigetti, Mythic and Waves are trying to create mainstream applications that employ AI and quantum computation techniques. This will bring AI to the masses by creating practical AI-driven applications for everyday consumers, and these companies have shown a keen interest in both the Microsoft and the Google AI chips. In fact, Brainwave may be best suited to companies like these, which want to apply AI capabilities to everyday tasks but remain few in number because of the limited computational capabilities of current chips. The challenges for all AI chips, including Brainwave, will still revolve around data handling capabilities, reliability of performance, and improving the memory capabilities of our current hardware systems.


Why Google Dart Will Never Win The Battle For The Browser

Ed Gordon
30 Dec 2014
5 min read
This blog is not about programming languages so much as it's about products and what makes good products (or, more specifically, why good products sometimes don't get used). I won't talk about the advantages or disadvantages of their syntax or how they work as programming languages, but I will talk about the product side. We can all have an opinion on that, right?

Real people use Dart. Really.

I think we've all seen a recent growth in the number of adopters of 'compile to JavaScript' languages – TypeScript and Dart being the primary ones, with an honourable mention to CoffeeScript for trying before most others. Asana just switched their hundreds of thousands of lines of JS code to TypeScript. I know that apps like Blossom are swapping out the JS-y bits of their code piece by piece. The axiom of my blog is that these things offer real developers (which I'm not) real advantages, right now. They're used because they are good products. They add productivity for a user base that is famously short on time and always working to tight deadlines. They take away no functionality (or very little, for the pedants out there) of JavaScript, but you get all the added benefits that the creators deigned to add. And for the select few, they can be a good choice. For online applications where a product lifespan may be 5 years or less, worries about code support for the next 20 years (anyone who still uses Perl) melt away. They aren't doing this because it's hipster; they're doing it because it works for them, and that's cool. I dig that. They will never, however, "ultimately… replace JavaScript as the lingua franca of web development".

Just missed the bull's eye

The main issue, from a product perspective, is that they are, by design, a direct response to the perceived shortcomings of JavaScript. Their value, and destiny as a product, is to be used by people who have struggled with JavaScript – is there anyone in the world who learned Dart before they learned JavaScript? They are linked to JavaScript in a way that limits their potential to that of JavaScript. If Dart is the Mercedes-Benz of the web languages (bear with me), then JavaScript is just "the car" (that is, all cars). If you want to drive over the Alps, you can choose the comfort of a Merc if you can afford it, but it's always going to ultimately be a car – four wheels that take you from point to point. You don't solve the problems of 'the car' by inventing a better car. You replace it by creating something completely different.

This is why, perhaps, they struggle to see any kind of adoption over the long term. Google Trends can be a great proxy for market size and adoption, and as you can see, "compile-to" languages just don't seem to be able to hold ground over a long period of time. After an initial peak of interest, the products tend to plateau or grow at a very slow rate. People aren't searching for information on these products because, in their limited capacity as 'alternatives to JavaScript', they offer no long-term benefit to the majority of developers who write JavaScript. They have dedicated fans and loyal users, but that base is limited to a small number of people. They are a 'want' product. No one needs them. People want the luxury of static typing, but you don't need it. People want cleaner syntax, but don't need it. But people need JavaScript. For "compile-to" languages to ever be more than a niche player, they need to transition from a 'want' product to a 'need' product. It's difficult to do that when your product also 'needs' the thing that you're trying to outdo.
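To make the 'want, not need' distinction concrete, here is a tiny TypeScript example of my own (it is not from the original post): static typing catches a class of mistakes at compile time that plain JavaScript only reveals, or silently tolerates, at runtime - pleasant to have, but nothing you cannot live without.

// addTax.ts - a nicety rather than a necessity
function addTax(price: number, rate: number): number {
  return price + price * rate;
}

console.log(addTax(9.99, 0.09)); // 10.8891

// In plain JavaScript the call below runs and quietly returns the string "9.990.8991"
// (string concatenation); the TypeScript compiler rejects it before the code ever ships.
// addTax("9.99", 0.09); // error: Argument of type 'string' is not assignable to parameter of type 'number'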
Going all out

In fact, all the 'compile to' tools, languages and libraries have a glass ceiling that becomes pretty visible from their Google Trends. Compare this to a Google language that IS its own product, Google Go, and we can see stark differences. Google Go is a language that offers an alternative to Python (and more - it's a fully featured programming language), but it's not even close to being Python. It can be used independently of Python – could you imagine if Google Go said, "We have this great product, but you can only use it in environments that already use Python. In fact, it compiles to Python. Yay."? This could work initially, but it would stink for the long-term viability of Go as a product that's able to grow organically, create its own ecosystem of tools and dedicated users, and carve out its own niche and area in which it thrives. Being decoupled from another product allows it to grow.

A summary of sorts

That's not to say that JavaScript is perfect. It itself actually started as a language designed to coat-tail the fame of Java (albeit a very different language). And when there are so many voices trying to compete with it, it becomes apparent that not all is well with the venerable king of the web. ECMAScript 6 (and 7, 8, 9 ad infinitum) will improve on it and make it more accessible – eventually incorporating the 'differences' that set things like Dart and TypeScript apart, and pulling the rug from under their feet. It will remain the lingua franca of the web until someone creates a product that is not beholden to JavaScript and not limited to what JavaScript can, or cannot, do. Dart will never win the battle for the browser. It is a product that many people want, but few actually need.


Will Oracle become a key cloud player, and what will it mean to the development & architecture community?

Phil Wilkins
13 Jun 2017
10 min read
This sort of question can provoke some emotive reactions, and many technologists, despite the stereotype, can get pretty passionate about our views. So let me put my cards on the table. My first book as an author is about Oracle middleware (Implementing Oracle Integration Cloud). I am an Oracle Ace Associate (soon to be a full Ace), which is comparable to a Java Rockstar, Microsoft MVP or SAP Mentor. I work for Capgemini as a Senior Consultant; as a large SI we work with many vendors, so I need to have a feel for all the options, even though I specialise in Oracle now. Before I got involved with Oracle I worked primarily with open source technologies, particularly JBoss and Fuse (before and after both were absorbed into Red Hat), and I have technically reviewed a number of open source books for Packt. So I should be able to provide a balanced argument. So, onto the …

A lot has been said about Oracle's Larry Ellison and his position on cloud technologies - most notably for rubbishing it in 2008, which is ironic, since those of us who remember the late 90s will recall Oracle heavily committing to a concept called the Network Machine, which could have led to a more cloud-like ecosystem had the conditions been right.

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. ... The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"[1]

Since then we've seen a slow change. The first cloud offerings we saw came in the form of Mobile Cloud Service, which provided a Mobile Backend as a Service (MBaaS). At this time Oracle's extensive programme to rationalize its portfolio and bring the best ideas and designs from PeopleSoft, E-Business Suite and Siebel together into a single cohesive product portfolio started to show progress – Fusion applications. Fusion applications, built on a WebLogic core and exploiting other investments, gave the company a product with the potential to become cloud enabled. If that initiative hadn't been started when it did, Oracle's position might look very different. But from a solid, standardised, container-based product portfolio the transition to cloud became a great deal easier, facilitated by the arrival of Oracle Database 12c, which provided the means to easily make the data storage at least multi-tenant. This combination gave Oracle the ability to sell ERP modules as SaaS, and meant Oracle could start to think about competing with the SaaS darlings of Salesforce, NetSuite and Workday.

However, ERPs don't live in isolation. Any organisation has to deal with its oddities, special needs and departmental solutions, as well as those systems that are unique and differentiate companies from their competition. This has driven the need to provide PaaS and IaaS as well. Not only that: Oracle themselves admitted that to make SaaS as cost effective as possible they needed to revise the infrastructure and software platform to maximise application density - a lesson that Amazon with AWS understood from the outset and has realized well. Oracle has also had the benefit of being a later starter: it has looked at what has and hasn't worked, and used its deep pockets to ensure it got the best skills to build the ideal answers, bypassing many of the mistakes and issues the pioneers had to go through.

This brought us to the state of a couple of years ago, where Oracle's core products had a cloud existence and Oracle were making headway winning new mid-market customers – after all, Oracle ERP is seen as something of a Rolls-Royce of ERPs: globally capable, well tested, and now cost accessible to more of the mid-market. So as an ERP vendor Oracle will continue to be a player; if there is a challenger, Oracle's pockets are deep enough to buy the competition, which is what happened with NetSuite. This may be very interesting to enterprise architects who need to take off-the-shelf building blocks and provide the solid corporate foundation, but for those of us who prefer to build and do something different, it's not so exciting.

In the last few years we have seen a lot of talk about digital disruptors and the need for serious agility (as in the ability to change and react, rather than the development ethos). To have this capability you need to be able to build and radically change solutions quickly, and yet still work with those core backbone accounting tasks. To use a Gartner expression, we need to be bimodal[2] to innovate, because application packages change comparatively slowly (they need to be slow and steady if you want to show that your accounting isn't going to look like Enron[3] or Lehman Brothers[4]). With this growing need to drive innovation and change ever faster, we have seen some significant changes in the way things tend to be done. In a way, the need to innovate has had such an impact that you could almost say that, in the process of trying to disrupt existing businesses through IT, we have achieved the disruption of software development. With the facilitation of the cloud, particularly IaaS, and the low cost of starting up and trying new solutions - growing them if they succeed, or mothballing them with minimal capital loss or delay if they don't - we have seen:

• The pace of service adoption accelerate exponentially, meaning the rate of scale-up and dynamic demand, particularly for end-user-facing services, has needed new techniques for scaling.
• Standards move away from being formulated by committees of companies wanting to influence or dominate a market segment - which produced some great ideas (UDDI as a concept was fabulous) but often very unwieldy results (ebXML, SOAP, UDDI, for example) - towards simpler standards that have largely evolved through simplicity and quickly recognized value (JSON, REST) to become de facto standards.
• New development paradigms that enable large solutions to be delivered whilst still keeping delivery on short cycles and supporting organic change (Agile, microservices).
• Continuous Integration and DevOps breaking down organisational structures and driving accountability – you build it, you make it run.
• The open source business model becoming the predominant route into the industry for a new software technology, without the need for deep pockets for marketing, alongside an acceptance that open source software can be as well supported as a closed source product.

For a long time, despite Oracle being the 'guardian' of Java and, a little more recently, MySQL, they haven't really managed to establish themselves as a 'cool' vendor. If you wanted a cool vendor you'd historically probably look at Red Hat, one of the first businesses to really get open source and community thinking. The perception, at least, has been that Oracle acquired these technologies either as a byproduct of a bigger game or as a way of creating an 'on ramp' to their bigger, more profitable products.

Oracle have started to recognise that to be seriously successful in the cloud, like AWS, you need to be pretty pervasive, and not only connect with the top of the decision tree but also with those at the code face. To do that you need a bit of the 'cool' factor. That means doing things beyond just the database and your core middleware - areas that are more and more frequently subject to potential disruption, such as Hadoop and big data, NoSQL, and things like Kafka in the middleware space. This also fits with the narrative that to do well with SaaS you need at least a very good IaaS, and the way Oracle has approached SaaS you definitely need a good PaaS - so they might as well make these commercial offerings too. This has resulted in Oracle moving from half a dozen cloud offerings to something in the order of nearly forty offerings classified as PaaS, plus a range of IaaS offerings that will appeal to developers and architects, from direct support for Docker through to Container Cloud, which provides a simplified Docker model, and on to Kafka, Node.js, MySQL, NoSQL and others. The web tier is pretty interesting with JET, which is an enterprise-hardened, certified version of Angular, React and Express with extra tooling, and which has been made available as open source. So the technology options are becoming a lot more interesting.

Oracle are also starting to target new startups, looking to get new organisations onto the Oracle platform from day one, in the same way it is easy for a startup to leverage AWS. Oracle have made some commitment to the Java developer community through JavaOne, which runs alongside its big brother conference, Open World. They are now seriously trying to reach out to the hardcore development community (not just Java, as the new Oracle cloud offerings are definitely polyglot) through Oracle Code. I was fortunate enough to present at the London occurrence of the event (see my blog here). What Oracle has not yet quite reached is the point of being as clearly easy to start working with as AWS and Azure. Yes, Oracle provide new sign-ups with 300 dollars of credit, but when you have a reputation (deserved or otherwise) for being expensive, it isn't necessarily going to get people on board in droves – compared, say, to AWS's free micro-instance for a year.

Conclusion

In all of this, I am of the view that Oracle are making headway; they are recognising what needs to be done to be a player. I have said in the past, and I believe it is still true, that Oracle is like an oil tanker or aircraft carrier: it takes time to decide to turn, and turning isn't quick, but once a course is set a real head of steam and momentum will be built, and I wouldn't want to be in the company's path. So let's look at some hard facts – Oracle's revenues remain pretty steady, and surprisingly Oracle showed up in the last week on LinkedIn's top employers list[5]. Oracle isn't going to just disappear; its database business alone will keep it alive for a very long time to come. Its SaaS business appears to be on a good trajectory, although more work on API enablement needs to take place. As an IaaS and PaaS technology provider, Oracle appear to be getting a handle on things.

Oracle is going to be attractive to end-user executives, as it is one of the very few vendors that covers all tiers of cloud from IaaS through PaaS to SaaS, providing the benefits of traditional hosting when needed as well as fully managed solutions and the benefits they offer. Oracle does still need to overcome some perception challenges: in many respects Oracle are seen the way Microsoft were in the 90s and 2000s - something of a necessary evil, and potentially expensive.

[1] http://www.businessinsider.com/best-larry-ellison-quotes-2013-4?op=1&IR=T/#oud-computing-maybe-im-an-idiot-but-i-have-no-idea-what-anyone-is-talking-about-1
[2] http://www.gartner.com/it-glossary/bimodal/
[3] http://www.investopedia.com/updates/enron-scandal-summary/
[4] https://en.wikipedia.org/wiki/Bankruptcy_of_Lehman_Brothers
[5] https://www.linkedin.com/pulse/linkedin-top-companies-2017-where-us-wants-work-now-daniel-roth


5 things that will matter in application development in 2018

Richard Gall
11 Dec 2017
4 min read
Things change quickly in application development. Over the past few years we've seen it merge with other fields: with the web becoming more app-like, DevOps turning everyone into a part-time sysadmin (well, sort of), and the full-stack trend shifting expectations about the modern programmer skill set, the field has become incredibly fluid and open. That means 2018 will present a wealth of challenges for application developers - but of course there will also be plenty of opportunities for the curious and enterprising… But what's going to be most important in 2018? What's really going to matter? Take a look below at our list of 5 things that will matter in application development in 2018.

1. Versatile languages that can be used on both client and server

Versatility is key to being a successful programmer today. That doesn't mean the age of specialists is over, but rather that you need to be a specialist in everything. And when versatility is important to your skill set, it also becomes important for the languages we use. It's for that reason that we're starting to see the increasing popularity of languages like Kotlin and Go. It's why Python continues to be popular - it's just so versatile. This is important when you're thinking about how to invest your learning time. Of course everyone is different, but learning languages that can help you do multiple things and solve different problems can be hugely valuable. Investing your energy in the most versatile languages will be well worth your time in 2018.

2. The new six-month Java release cycle

This will be essential for Java programmers in 2018. Starting with the release of Java 9 early in 2018, the new cycle will kick in. This might mean there's a little more for developers to pay attention to, but it should make life easier, as Oracle will be able to update and add new features to the language with greater effectiveness than ever before. From a more symbolic point of view, this move hints a lot at the deepening of open source culture in 2018, with Oracle aiming to satisfy developers working on smaller systems, keen to constantly innovate, as much as its established enterprise clients.

3. Developing usable and useful conversational UI

Conversational UI has been a 'thing' for some time now, but it hasn't quite captured the imagination of users. This is likely because it simply hasn't proved that useful yet - like 3D film, it feels like too much of a gimmick, maybe even too much of a hassle. It's crucial - if only to satisfy the hype - that developers finally find a way to make conversational UI work. To really make it work we're ultimately going to need to join the dots between exceptionally good artificial intelligence and a brilliant user experience - making algorithms that 'understand' user needs and can adapt to what people want.

4. Microservices

Microservices certainly won't be new in 2018, but they are going to play a huge part in how software is built in 2018. Put simply, if they're not important to you yet, they will be. We're going to see more organizations moving away from monolithic architectures, looking to engineering teams to produce software in ways that are much more dynamic and much more agile. Yes, these conversations have been happening for a number of years; but like everything when it comes to tech, change happens at different speeds. It's only now, as technologies mature, developer skill sets change, and management focus shifts, that broader changes take place.

5. Taking advantage of virtual and augmented reality

Augmented Reality (AR) and Virtual Reality (VR) have been huge innovations within fields like game development. But in 2018, we're going to see both expand beyond gaming and into other fields. It's already happening in many areas, such as healthcare, and for engineers and product developers/managers, it's going to be an interesting 12 months to see how the market changes.

Five Benefits of .NET Going Open Source

Ed Bowkett
12 Dec 2014
2 min read
By this point, I'm sure almost everyone has heard the news of Microsoft's decision to open source the .NET framework. This blog will cover what the benefits of this decision are for developers and what it means. Remember, this is just an opinion, and I'm sure there are differing views out there in the wider community.

More variety

People no longer have to stick with Windows to develop .NET applications. They can choose between operating systems, and this doesn't lock developers down. It makes things more competitive and, ultimately, opens .NET up to a wider audience. The primary advantage of this announcement is that .NET developers can build more apps to run in more places, on more platforms. It means a more competitive marketplace, and it opens developers up to one of the fastest-growing operating systems in the world, Linux.

Innovate .NET

Making .NET open source allows the code to be revised and rewritten. This will have dramatic outcomes for .NET, and it will be interesting to see what developers do with the code as they continually look for new functionality in .NET.

Cross-platform development

The ability to develop across different operating systems is now massive. Previously, this was only available with the Mono project, Xamarin. With Microsoft looking to add more Xamarin tech to Visual Studio, this will be an interesting development to watch moving into 2015.

A new direction for Microsoft

By opening .NET up as open source software, Microsoft seems to have adopted a more "developer-friendly" approach under the new CEO, Satya Nadella. That's not to say the previous CEO ignored developers, but being more open as a company, and changing its view on open source, has allowed Microsoft to reach out to communities more easily and quickly. Take the recent deal Microsoft made with Docker: it looks like Microsoft is heading in the right direction in terms of closing the gap between the company and developers.

Acknowledgement of other operating systems

When .NET first came around, back in 2002, the entire world ran on Windows - it was the dominant operating system, certainly in terms of the mass audience. Today, that simply isn't the case: you have Mac OS X, you have Linux - there is much more variety, and as a result Microsoft, by taking .NET open source, has acknowledged that Windows is no longer the number one option in workplaces.


Swift: Missing Pieces & Surviving Change

Nicholas Maccharoli
14 Mar 2016
5 min read
Change

Swift is still a young language when compared to other languages like C, C++, Objective-C, Ruby, and Python. It is therefore subject to major changes that will often result in code breaking, even for simple operations like calculating the length of a string. Packaging functionality that is prone to change into operators, functions or computed properties may make dealing with these transitions easier. It will also reduce the number of lines of code that need to be repaired every time Swift undergoes an update.

Case study: String length

A great example of something breaking between language updates is the task of getting a string's character length. In versions of Swift prior to 1.2, the way to calculate the length of a native string was countElements(myString), but in version 1.2 it became just count(myString). Later, at WWDC 2015, Apple announced that many functions that were previously global - such as count - were now implemented as protocol extensions. This resulted in once again having to rewrite parts of existing code as myString.characters.count.

So how can one make these code repairs between updates more manageable? With a little help from our friends, computed properties, of course! Say we were to write a line like this every time we wanted to get the length of a string:

let length = count(myString)

And then all of a sudden this method becomes invalid in the next major release, and we have unfortunately calculated the length of our strings this way in, say, over fifty places. Fixing this would require a code change in all fifty places. But could this have been mitigated? Yes - we could have used a computed property on the string called length right from the start:

extension String {
    var length: Int {
        return self.characters.count
    }
}

Had our Swift code originally been written like this, all that would be required is a one-line change, because the other fifty places would still be receiving a valid Int from the call myString.length.

Missing pieces

Swift has some great shorthand and built-in operators for things like combining strings - let fileName = fileName + ".txt" - and appending to arrays - waveForms += ["Triangle", "Sawtooth"]. So what about adding one dictionary to another?

// Won't work
let languageBirthdays = ["C": 1972, "Objective-C": 1983] + ["python": 1991, "ruby": 1995]

But it works out of the box in Ruby:

compiled = { "C" => 1972, "Objective-C" => 1983 }
interpreted = { "Ruby" => 1995, "Python" => 1991 }
programming_languages = compiled.merge(interpreted)

And Python does not put up much of a fuss either (note that update modifies the dictionary in place):

compiled = {"C": 1972, "Objective-C": 1983}
interpreted = {"Ruby": 1995, "Python": 1991}
compiled.update(interpreted)
programming_languages = compiled

So how can we make appending one dictionary to another go as smoothly as it does with other container types like arrays in Swift? By overloading the + and += operators to work with dictionaries, of course!

func + <Key, Value> (var lhs: Dictionary<Key, Value>, rhs: Dictionary<Key, Value>) -> Dictionary<Key, Value> {
    rhs.forEach { lhs[$0] = $1 }
    return lhs
}

func += <Key, Value> (inout lhs: Dictionary<Key, Value>, rhs: Dictionary<Key, Value>) -> Dictionary<Key, Value> {
    lhs = lhs + rhs
    return lhs
}

With a light application of generics and operator overloading, we can make the syntax for dictionary addition the same as the syntax for array addition.

Operators FTW: Regex shorthand

One thing you may have encountered during your time with Swift is the lack of support for regular expressions. At the time of writing, Swift is at version 2.1.1 and there is no regular expression support in the Swift Standard Library. The next best thing is to rely on a third-party library or the Foundation framework's NSRegularExpression. The issue is that the code needed to find even a simple match with NSRegularExpression is a bit long-winded to write every time you wish to check for a match. Putting it into a function is not a bad idea, but defining an operator may make our code a bit more compact. Taking inspiration from Ruby's =~ regex operator, let's make a simple version returning a Bool indicating whether there was a match:

import Foundation   // NSRegularExpression lives in Foundation

infix operator =~ { associativity left precedence 140 }

func =~ (lhs: String, rhs: String) -> Bool {
    if let regex = try? NSRegularExpression(pattern: rhs, options: NSRegularExpressionOptions.CaseInsensitive) {
        let matches = regex.matchesInString(lhs, options: NSMatchingOptions.ReportCompletion, range: NSMakeRange(0, lhs.length))
        return matches.count > 0
    } else {
        return false
    }
}

(Take note of our trusty length computed property springing into action.) As of Swift 2.1 there is no built-in operator called =~, so we first need to declare the symbol, telling the Swift compiler that it is an infix operator taking operands on the left and right, with a precedence of 140 and left associativity. Associativity and precedence only matter when multiple operators are chained together, but I imagine most uses of this operator will look something like:

guard testStatus =~ "TEST SUCCEEDED" else { reportFailure() }

Have fun, but be courteous

It would be wise to observe the Law of the Instrument and not treat everything as a nail just because you have a hammer within arm's reach. When making the decision to wrap functionality into an operator, or to use a computed property in place of the canonical way of coding something explicitly, first ask yourself whether this really improves readability, or whether you are just reducing the amount of typing - and think about how easily the next person reading your code could adapt. If you want to create even better Swift apps then check out our article on making the most of the Flyweight pattern in Swift - perfect when you need a large number of similar objects!

About the author

Nick Maccharoli is an iOS / backend developer and open source enthusiast working at a startup in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma


Simple Player Health

Gareth Fouche
22 Dec 2016
8 min read
In this post, we'll create a simple script to manage player health, then use that script and Unity triggers to create health pickups and environmental danger (lava) in a level.

Before we get started on our health scripts, let's create a prototype 3D environment to test them in. Create a new project with a new scene and save it as "LavaWorld". Begin by adding two textures to the project, a tileable rock texture and a tileable lava texture. If you don't have those assets already, there are many sources of free textures online. Create two new Materials named "LavaMaterial" and "RockMaterial" to match the new textures by right-clicking in the Project pane and selecting Create > Material. Drag the rock texture into the Albedo slot of RockMaterial. Drag the lava texture into the Emission slot of LavaMaterial to create a glowing lava effect. Now our materials are ready to use.

In the Hierarchy view, use Create > 3D Object > Cube to create a 3D cube in the scene. Drag RockMaterial into the Materials > Element 0 slot on the Mesh Renderer of your cube in order to change the cube texture from the default blue material to your rock texture. Use the scale controls to stretch and flatten the cube. We now have a simple "rock platform". Copy and paste the platform a few times, moving the new copies away to form small "islands". Create a few more copies of the rock platform, scale them so that they're long and thin, and position them as bridges between the "islands".

Now, create a new cube named "LavaVolume", and assign it the LavaMaterial. Scale it so that it is large enough to encompass all the islands but shallow (scale the y-axis height down). Move it so that it is lower than the islands, so they appear to float in a lava field. In order to make it possible for a player to fall into the lava, check the Box Collider's "Is Trigger" property on LavaVolume. The Box Collider will now act as a trigger volume, no longer physically blocking objects that come into contact with it, but notifying the script when an object moves through the collider volume. This presents a problem, as objects will now fall through the lava into infinite space! To deal with this, make another copy of the rock platform and scale/position it so that it's of a similar dimension to the lava, also wide but flat, and place it just below the lava, so that it forms a rock "floor" under the lava volume. To make your scene a little nicer, repeat the process to create rock walls around the lava, hiding where the lava volume ends. A few point lights (Create > Light > Point Light) scattered around the islands will also add interesting visual variety.

Now it's time to add a player! First, import the "Standard Assets" package from the Unity Asset Store (if you don't know how to do this, google the Unity Asset Store to learn about it). In the newly imported Standard Assets project folder, go to Characters > FirstPersonCharacter > Prefabs. There you will find the FPSController prefab. Drag it into your scene, rename it to "Player" and position it on one of the islands. Delete the old main camera that you had in your scene; the FPSController has its own camera. If you run the project, you should be able to walk around your scene, from island to island. You can also walk in the lava, but it doesn't harm you - yet.

To make the lava an actual threat, we start by giving our player the ability to track its health. In the Project pane, right-click and select Create > C# Script. Name the script "Player". Drag the Player script onto the Player object in the Hierarchy view. Open the script in Visual Studio and add code as follows: this script exposes a variable, maxHealth, which determines how much health the Player starts with and the maximum health they can ever have. It exposes a function to alter the Player's current health. And it uses a reference to a Text object to display the Player's current health on screen.

Back in Unity, you can now see the Max Health property exposed in the Inspector. Set Max Health to 100. There is also a field for Current Health Label, but we don't currently have a GUI. To remedy this, in the Hierarchy view, select Create > UI > Canvas and then Create > UI > Label. This will create the UI root and a text label on it. Change the label's text to "Health:", the font size to 20 and the colour to white. Drag it to the bottom left corner of the screen (and make sure the Rect Transform anchor is set to bottom left). Duplicate that text label, offset it right a little from the previous label and change the text to "0". Rename this new label "CurrentHealthLabel". In the Hierarchy view, drag CurrentHealthLabel into your Player script's "Current Health Label" property. If we run now, we'll have a display in the bottom corner of the screen showing our Player's health of 100.

By itself, this isn't particularly exciting. Time to add lava! Create a new C# script as before and call it Lava. Add this Lava script to the LavaVolume scene object. Open the script in Visual Studio and insert code as follows, noting the OnTriggerEnter and OnTriggerExit functions. Because LavaVolume, the object we've added this script to, has a collider with Is Trigger checked, whenever another object enters LavaVolume's box collider, OnTriggerEnter will be called, with the colliding object's Collider passed as a parameter. Similarly, when an object leaves LavaVolume's collider volume, OnTriggerExit will be called. Taking advantage of this, we keep a list of all players who enter the lava. Then, during the Update call, if any players are in the lava, we apply damage to them periodically. damageTickTime determines the interval between each time we apply damage (a "tick"), and damagePerTick determines how much damage we apply per tick. Both properties are exposed in the Inspector by the script so that they're customizable. Set the values to Damage Per Tick = 5 and Damage Tick Time = 0.1.

Now, if we run the game, stepping in the lava hurts! But it's a bit of an anti-climax, since nothing actually happens when our health gets down to 0. Let's make things a little more fatal. First, use a paint program to create a "You Died!" screen at 1920 x 1080 resolution. Add that image to the project. Under the Import Settings, set the Texture Type to Sprite (2D and UI). Then, from the Hierarchy, select Create > UI > Image. Make the size 1920 x 1080, and set the Source Image property to your new player-died sprite image. Go back to your Player script and extend the code as follows: the additions add a reference to the player-died screen, and code in a CheckDead function that checks whether the player's health has reached 0, displaying the death screen if it has. The function also disables the FirstPersonController script when the player dies, so that the player can't continue to move the Player around via keyboard/mouse input after dying. Return to the Hierarchy view, and drag the player-died screen into the exposed Dead Screen property on the Player script. Now, if you run the game, stepping in lava will "kill" the player if they stay in it long enough. Better!

But it's only fair to add a way for the Player to recover health, too. To do so, use a paint program to create a new "medkit" texture. Following the same procedure used to create the LavaVolume, create a new cube called HealthKit, give it a Material that uses the new medkit texture, and enable "Is Trigger" on the cube's Box Collider. Create a new C# script called "Health Pickup", add it to the cube, and insert code as follows: simpler than the Lava script, it adds health to a Player that collides with it, before disabling itself. Scale the HealthKit object until it looks about the right size for a health pack, then copy and paste a few of the packs across the islands. Now, when you play, if you manage to extricate yourself from the lava after falling in, you can collect a health pack to restore your health!

And that brings us to the end of the Simple Player Health tutorial. We have a deadly lava level with health pickups, just waiting for enemy characters to be added.

About the author

Gareth Fouche is a game developer. He can be found on GitHub at @GarethNN


What are Edge Analytics?

Peter Worthy
22 May 2017
5 min read
We already know that mobile is a big market with growing opportunities. We are also hearing about the significant revenue that the IoT will generate - Machina Research predicts that the revenue opportunity will increase to USD $4 trillion by 2025. In the mainstream, both of these technologies are heavily reliant on the cloud, and as they become more pervasive, issues such as response delay and privacy are starting to surface. That's where Edge Computing and Edge Analytics can help.

Cloud, Fog and Mist

As the demand for more complex applications on mobile increased, we needed to offload some of the computational demands from the mobile device. An example is speech recognition and processing applications such as Siri, which need to access cloud-based servers in order to process users' requests. The cloud enabled a wide range of services to be delivered on mobile thanks to almost unlimited processing and storage capability. However, the trade-off was the delay arising from the fact that the cloud infrastructure was often a large distance from the device. The solution is to move some of the data processing and storage either to a location closer to the device (a "cloudlet" or "edge node") or to the device itself. "Fog Computing" is where some of the processing and storage of data occurs between the end devices and cloud computing facilities. "Mist Computing" is where the processing and storage of data occurs in the end devices themselves. These are collectively known as "Edge Computing" or "Edge Analytics" and, more specifically for mobile, "Mobile Edge Computing" (MEC).

The benefits of Edge Analytics

As a developer of either mobile or IoT technology, Edge Analytics provides significant benefits.

Responsiveness. In essence, the proximity of the cloudlets or edge nodes to the end devices reduces latency in response. Often higher bandwidth is possible and jitter is reduced. This is particularly important where a service is sensitive to response delays or has high processing demands, such as VR or AR.

Scalability. By processing the raw data either in the end device or in the cloudlet, the demands placed on the central cloud facility are reduced, because smaller volumes of data need to be sent to the cloud. This allows a greater number of connections to that facility.

Maintaining privacy. Maintaining privacy is a significant concern for IoT. Processing data in either end devices or cloudlets gives the owner of that data the ability to control what is released before it is sent to the cloud. Further, the data can be anonymised or aggregated before transmission.

Increased development flexibility. Developers of mobile or IoT technology are able to use more contextual information and a wider range of SDKs specific to the device.

Dealing with cloud outages. In March this year, Amazon AWS had a server outage, causing significant disruption for many services that relied upon its S3 storage service. Edge computing and analytics could effectively allow your service to remain operational by falling back to a cloudlet.
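As a rough illustration of the scalability and privacy benefits above, here is a small sketch of my own (not from the article) of what an edge node might do: aggregate and anonymise raw readings locally and forward only a compact summary to the cloud. The field names and ingestion URL are invented, and it assumes a runtime with fetch available (a modern browser or Node 18+).

// Hypothetical edge-node aggregation: raw readings stay local, only a summary leaves the device.
interface Reading { deviceId: string; timestamp: number; speedKmh: number; }

interface Summary { windowStart: number; windowEnd: number; sampleCount: number; meanSpeedKmh: number; }

function summarise(readings: Reading[]): Summary {
  const speeds = readings.map(r => r.speedKmh);
  const mean = speeds.reduce((a, b) => a + b, 0) / Math.max(speeds.length, 1);
  return {
    windowStart: Math.min(...readings.map(r => r.timestamp)),
    windowEnd: Math.max(...readings.map(r => r.timestamp)),
    sampleCount: readings.length,
    meanSpeedKmh: mean,
    // deviceId is deliberately dropped, so no per-device data ever reaches the cloud
  };
}

// Only the small, anonymised summary is uploaded; the URL stands in for a real ingestion endpoint.
async function flushWindow(readings: Reading[]): Promise<void> {
  if (readings.length === 0) return;
  await fetch("https://example.com/ingest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(summarise(readings)),
  });
}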
Examples

IoT technology is being used to monitor and then manage traffic conditions in a specific location. Identifying traffic congestion, determining its source and determining alternative routes requires fast data processing. Using cloud computing results in response delays associated with the transmission of significant volumes of data both to and from the cloud; Edge Analytics means the data is processed closer to that location, so results reach drivers in a much shorter time. Another example is supporting the distribution of localized content, such as adding advertisements to a video stream that is only being distributed within a small area, without having to divert the stream to another server for processing.

Open issues

As with all emerging technology, there are a number of open or unresolved issues for Edge Analytics. A major, non-technical issue is: what business model will support the provision of cloudlets or edge nodes, and what will be the consequent cost of providing these services? Security also remains a concern: how will the perimeter security of cloudlets compare to that implemented in cloud facilities? And as IoT continues to grow, so will the variety of needs for the processing and management of its data - how will cloudlets cope with this demand?

Increased responsiveness, flexibility, and greater control over data to reduce the risk of privacy breaches are strong (and not the only) reasons for adopting Edge Analytics in the development of your mobile or IoT service. It presents a real opportunity to differentiate service offerings in a market that is only going to get more competitive in the near future. Consider the different options available to you, and the benefits and pitfalls of each.

About the author

Peter Worthy is an Interaction Designer currently completing a PhD exploring human values and the design of IoT technology in a domestic environment. Professionally, Peter's interests range from design research methods to understanding HCI and UX in emerging technologies, and physical computing. Pete also works at a university, tutoring across a range of subjects and supporting a project that seeks to develop context-aware assistive technology for people living with dementia.

Data science folks have 12 reasons to be thankful for this Thanksgiving

Savia Lobo
21 Nov 2017
8 min read
We are nearing the end of 2017. But with each ending chapter, we have remarkable achievements to be thankful for. For the data science community, this year was filled with a number of new technologies, tools, version updates and more. 2017 saw blockbuster releases such as PyTorch, TensorFlow 1.0 and Caffe 2, among many others. We invite data scientists, machine learning experts, and other data science professionals to come together on this Thanksgiving Day and thank the organizations that made our interactions with AI easier, faster, better and generally more fun. Let us recall our blessings in 2017, one month at a time...

January: Thank you, Facebook and friends for handing us PyTorch
Hola 2017! While the world was still in the New Year mood, a brand new deep learning framework was released. Facebook, along with a few other partners, launched PyTorch. PyTorch came as an improvement on the popular Torch framework, supporting the Python language over the less popular Lua. Because PyTorch works just like Python, it is easier to debug and to create unique extensions for. Another notable change was the adoption of a dynamic computational graph, used to create graphs on the fly with high speed and flexibility.

February: Thanks, Google, for TensorFlow 1.0
The month of February brought data scientists a Valentine's gift with the release of TensorFlow 1.0. Announced at the first annual TensorFlow Developer Summit, TensorFlow 1.0 was faster, more flexible, and production-ready. Here's what the TensorFlow box of chocolates contained:
- Full compatibility with Keras
- Experimental APIs for Java and Go
- New Android demos for object and image detection, localization, and stylization
- A brand new TensorFlow debugger
- An introductory glance at XLA, a domain-specific compiler for TensorFlow graphs

March: We thank Francois Chollet for making Keras 2 a production-ready API
Congratulations! Keras 2 is here. This was great news for data science developers, as Keras 2, a high-level neural network API, allowed faster prototyping. It supported both CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks). Keras has an API designed specifically for humans; in other words, a user-friendly API. It also allowed easy creation of modules, which makes it well suited to advanced research. Developers can now code in Python, a compact, easy-to-debug language.

April: We like Facebook for brewing us Caffe 2
Data scientists were greeted by a fresh aroma of coffee this April, as Facebook released the second version of its popular deep learning framework, Caffe. Caffe 2 arrived as an easy-to-use deep learning framework for building DL applications and leveraging community contributions of new models and algorithms. It came fresh with first-class support for large-scale distributed training, new hardware support, mobile deployment, and the flexibility for future high-level computational approaches. It also provided easy methods to convert DL models built in the original Caffe to the new Caffe version. Caffe 2 also shipped with over 400 different operators, the basic units of computation in Caffe 2.

May: Thank you, Amazon for supporting Apache MXNet on AWS and Google for your TPU
The month of May brought some exciting launches from two tech giants, Amazon and Google: Amazon Web Services brought Apache MXNet on board, and Google's second-generation TPU chips were announced. Apache MXNet, now available on AWS, allowed developers to build machine learning applications that train quickly and run anywhere, making it a scalable approach for developers. Next up were Google's second-generation TPU (Tensor Processing Unit) chips, designed to speed up machine learning tasks. These chips were supposed to be (and are) more capable than CPUs and even GPUs.

June: We thank Microsoft for CNTK v2
The middle of the year arrived with Microsoft's announcement of version 2 of its Cognitive Toolkit. The new Cognitive Toolkit was enterprise-ready, offered production-grade AI, and allowed users to create, train, and evaluate their own neural networks, scalable across multiple GPUs. It also included Keras API support, faster model compression, Java bindings, and Spark support, plus a number of new tools to run trained models on low-powered devices such as smartphones.

July: Thank you, Elastic.co for bringing ML to the Elastic Stack
July made machine learning generally available to Elastic Stack users with version 5.5. With ML, anomaly detection on Elasticsearch time series data became possible, allowing users to analyze the root cause of problems in their workflow and thus reduce false positives.

August: Thank you, Google for Deeplearn.js
August announced the arrival of Google's Deeplearn.js, an initiative that allows machine learning models to run entirely in the browser. Deeplearn.js is an open source, WebGL-accelerated JS library. It offers an interactive client-side platform that helps developers carry out rapid prototyping and visualization. Developers can now use a hardware accelerator such as the GPU via WebGL and perform faster computations with 2D and 3D graphics. Deeplearn.js also allows TensorFlow models' capabilities to be imported into the browser. Surely something to be thankful for!

September: Thanks, Splunk and SQL for your upgrades
September's surprises came with the release of Splunk 7.0, which helps bring machine learning to the masses with an added Machine Learning Toolkit that is scalable, extensible, and accessible. It includes native support for metrics, which speeds up query processing performance by 200x. Other features include seamless event annotations, improved visualization, faster data model acceleration, and a cloud-based self-service application. September also brought the release of MySQL 8.0, which included first-class support for Unicode 9.0. Other features included:
- Extended support for native JSON data
- Window functions and recursive SQL syntax for queries that were previously impossible or difficult to write
- Added document-store functionality
So, big thanks to the Splunk and SQL upgrades.

October: Thank you, Oracle for the Autonomous Database Cloud and Microsoft for SQL Server 2017
As fall arrived, Oracle unveiled the world's first Autonomous Database Cloud. It provides full automation of tuning, patching, updating and maintaining the database. It is self-scaling, instantly resizing compute and storage without downtime and with low manual administration costs. It is also self-repairing and guarantees 99.995 percent reliability and availability. That's a lot of workload taken off your hands!
Next, developers were greeted with the release of SQL Server 2017, a major step towards making SQL Server a platform. It included multiple enhancements to the Database Engine, such as adaptive query processing, automatic database tuning, graph database capabilities, new Availability Groups, and the Database Tuning Advisor (DTA). It also added a new Scale Out feature in SQL Server 2017 Integration Services (SSIS), and SQL Server Machine Learning Services now reflects support for the Python language.

November: A humble thank you to Google for TensorFlow Lite and Elastic.co for Elasticsearch 6.0
Just a month more for the year to end! The data science community has had a busy November, with too many releases to keep an eye on and Microsoft Connect(); spilling the beans. So, November, thank you for TensorFlow Lite and Elastic 6. TensorFlow Lite, a lightweight product for mobile and embedded devices, is designed to be:
- Lightweight: it allows inference of on-device machine learning models with a small binary size, allowing faster initialization and startup.
- Fast: model loading time is dramatically improved, with accelerated hardware support.
- Cross-platform: it includes a runtime tailor-made to run on various platforms, starting with Android and iOS.
And now for Elasticsearch 6.0, which has been made generally available, with features such as easier upgrades, index sorting, better shard recovery, and support for sparse doc values. There are other new features spread out across the Elastic Stack, comprising Kibana, Beats and Logstash: Elastic's solutions for visualization and dashboards, data ingestion, and log storage.

December: Thanks in advance, Apache, for Hadoop 3.0
Christmas gifts may arrive for data scientists in the form of the general availability of Hadoop 3.0. The new version is expected to include support for erasure coding in HDFS, version 2 of the YARN Timeline Service, shaded client jars, support for more than two NameNodes, MapReduce task-level native optimization, and support for opportunistic containers and distributed scheduling, to name a few. It will also include a rewritten version of the Hadoop shell scripts, with bug fixes, improved compatibility and changes to some existing installation procedures.

Phew! That was a large list of tools for data scientists and developers to be thankful for this year. Whether it be new frameworks, libraries or a new set of software, each one of them is unique and helpful for creating data-driven applications. Hopefully, you have used some of them in your projects. If not, be sure to give them a try, because 2018 is all set to overload you with new, and even more amazing, tools, frameworks, libraries, and releases.

Cyber Security and the Internet of Things

Owen Roberts
12 Jun 2016
4 min read
We’re living in a world that’s more connected than we once thought possible. Even 10 years ago, the idea of our household appliances being connected to our Nokias was impossible to comprehend. But things have changed, and almost every week we seem to see another day-to-day item connected to the internet. Twitter accounts like @internetofShit are dedicated to pointing out every random item that is now online, from smart wallets to video-linked toothbrushes to DRM-infused wine bottles. But there is a very real side to all the laughing and caution: for every connected device you add to your network, you're giving attackers another potential hole to crawl through.

This weekend, save 50% on some of our very best IoT titles - or, if one's not enough, pick up any 5 featured products for $50! Start exploring here.

IoT security has simply not been given much attention by companies. Last year, two security researchers managed to wirelessly hack into a Jeep Cherokee, first taking control of the entertainment system and windshield wipers before moving on to disable the accelerator; just months earlier, a security expert managed to take over a plane and force it to fly sideways by making a single engine go into climb mode. In 2013, over 40 million credit card numbers were taken from US retailer Target after hackers got into the network via the air conditioning company that worked with the retailer. The reaction to these events was huge, along with a multitude of editorials wondering how this could happen… while security experts wondered in turn how it took so long.

The problem until recently was that the IoT was seen mostly as a curio – a phone app that turns your lights on or starts the kettle at the right time was seen as a quaint little toy to mess around with for a bit, and it was hard for most to fully realize how it could tear a massive hole in your network security. Plus, the speed at which these new gadgets enter the market keeps increasing: what used to take 3-4 years to reach the market now takes a year or less to capitalize on the latest hype; Kickstarter projects by those new to business are being sent out into the world, and homebrew is on the rise. To give an example of how this landscape could affect us, the French technology institute Eurecom downloaded some 32,000 firmware images from potential IoT device manufacturers and discovered 38 vulnerabilities across 123 products. These products were found in at least 140K devices accessible over the internet. Now imagine what the total number of vulnerabilities across all IoT products on all networks is; the potential number is scarily huge.

The wind is changing, slowly. In October, the IoT Security Summit is taking place in Boston, with representatives from both the FBI and US Homeland Security playing prominent roles as speakers. Experts are finally speaking up about the need to properly secure our interconnected devices. As the IoT becomes mainstream and interconnected devices become more affordable to the general public, we need to do all we can to ensure that potential security cracks are filled as soon as possible; every new connection is a potential entrance for attackers to break in, and many people simply have little to no knowledge of how to improve their computer security. While this will improve as time goes on, companies and developers need to be proactive in their advancement of IoT security.
Choosing not to do so will mean that the IoT will become less of a tech revolution and more of a failure left on the wayside.

One Shot Learning: Solution to your low data problem

Savia Lobo
04 Dec 2017
5 min read
The fact that machines are successful at replicating human intelligence is mind-boggling. However, this is only possible if machines are fed the correct mix of algorithms, a huge collection of data and, most importantly, training, which in turn leads to faster prediction or recognition of objects within images. On the other hand, when you train humans to recognize a car, for example, you simply have to show them a real car or an image of one. The next time they see any vehicle, it is easy for them to distinguish a car amongst other vehicles. Can machines learn from a single training example the way humans do? Computers lack a key part that distinguishes them from humans: memory. Machines cannot remember in this way; hence they require millions of data points in order to learn object detection from every angle. To reduce this appetite for training data and enable machines to learn with less data at hand, one shot learning comes to their assistance.

What is one shot learning and how is it different from other learning?
Deep neural network models perform well on tasks such as image recognition, speech recognition and so on. However, such performance is possible only due to extensive, incremental training on large datasets. When there is a smaller dataset or fewer training examples, a traditional model is trained on the data that is available; during this process it learns new parameters and incorporates new information, and completely forgets what it previously learned. This leads to poor training, or catastrophic interference. One shot learning proves to be a solution here, as it is capable of learning from one, or a minimal number of, training samples without forgetting. The reason is that such models possess meta-learning, a capability often seen in neural networks that have memory.

How does one shot learning work?
One shot learning strengthens the ability of deep learning models without the need for a huge dataset to train on. One implementation of one shot learning can be seen in the Memory Augmented Neural Network (MANN) model. A MANN has two parts: a controller and an external memory module. The controller is either a feed-forward neural network or an LSTM (Long Short Term Memory) network, which interacts with the external memory module using a number of read/write heads. These heads fetch representations from the memory or place them into it. LSTMs are proficient at long-term storage through slow updates of weights, and at short-term storage via the external memory module. They are trained to meta-learn, i.e. they can rapidly learn unseen functions with fewer data samples; thus, MANNs are said to be capable of meta-learning. The MANN model is then trained on datasets that include many different classes with very few samples each, for instance the Omniglot dataset, a collection of handwritten characters from different alphabets with only a few samples of each. After training the model over thousands of iterations with few samples, it was able to recognize never-seen-before image samples taken from a disjoint split of the Omniglot dataset. This shows that MANN models can handle object categorization tasks with minimal data. Similarly, one shot learning can also be achieved using Neural Turing Machines and active one shot learning. Therefore, learning with a single attempt, or one shot, actually involves meta-learning. This means the model gradually learns useful representations from the raw data using certain algorithms, for instance gradient descent. Using these learnings as base knowledge, the model can rapidly absorb never-seen-before information from a single, one-shot appearance via an external memory module.

Use cases of one shot learning
Image recognition: image representations are learnt using a supervised, metric-based approach. For instance, a siamese neural network, a pair of identical sister networks, discriminates the class identity of an image pair; the features of this network are reused for one-shot learning without the need for retraining (a minimal sketch of this pairwise idea appears at the end of this article).
Object recognition within images: one shot learning allows neural network models to recognize known objects and their categories within an image. For this, the model learns to recognize the object from a small set of training samples, and later estimates the probability of the object being present within the image provided. Such a model trained in one shot can recognize objects in an image despite clutter, viewpoint, and lighting changes.
Predicting accurate drugs: datasets for drug discovery are either limited or expensive. A molecule found during a biological study often does not end up becoming a drug for reasons such as toxicity, low solubility and so on, so only a small amount of data is available about a candidate molecule. Using one shot learning, an iterative LSTM combined with a graph convolutional neural network is used to optimize the candidate molecule, by finding similar molecules with increased pharmaceutical activity and lower risks to patients. A detailed explanation of how accurate drugs can be predicted from low data is discussed in a research paper published by the American Chemical Society (ACS).
One shot learning is in its infancy, and therefore its use cases appear in familiar applications such as image and object recognition. As the technique advances and adoption grows, other applications of one shot learning will come into the picture.

Conclusion
One shot learning is being applied to machine learning and deep learning models that have little data available for their training. A plus point for the future is that organizations will not have to collect huge amounts of data to train their ML models; a few training samples will do the job! A large number of organizations are looking forward to adopting one shot learning within their deep learning models. It will be exciting to see how one shot learning glides towards being the base of every neural network implementation.
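To make the pairwise, metric-based idea mentioned under the image recognition use case a little more concrete, here is a minimal, illustrative sketch of a siamese-style comparison in PyTorch. The embedding network, the contrastive loss and the toy tensors are all assumptions made for this example; it is not the architecture from any specific paper or product mentioned above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Tiny CNN that maps a 1x28x28 image to a 64-dimensional embedding."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, 64)

    def forward(self, x):
        x = self.conv(x)
        return self.fc(x.flatten(start_dim=1))

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pull embeddings of same-class pairs together, push different-class pairs apart."""
    dist = F.pairwise_distance(z1, z2)
    loss_same = same_class * dist.pow(2)
    loss_diff = (1 - same_class) * F.relu(margin - dist).pow(2)
    return (loss_same + loss_diff).mean()

# Toy usage with random tensors standing in for image pairs (e.g. Omniglot characters).
net = EmbeddingNet()
img_a, img_b = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 2, (8,)).float()  # 1 = same class, 0 = different class
loss = contrastive_loss(net(img_a), net(img_b), labels)
loss.backward()
```

At inference time, classifying a brand-new category reduces to embedding its single labelled example and picking the nearest stored embedding, which is where the "one shot" comes from.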

4 ways Artificial Intelligence is leading disruption in Fintech

Pravin Dhandre
23 Nov 2017
6 min read
In the digital disruption era, Artificial Intelligence in fintech is viewed as an emerging technology forming the central premise for revolution in the sector. Tech giants on the Fortune 500 technology list, such as Apple, Microsoft and Facebook, are putting resources into product innovation and technology automation. Businesses are investing hard to bring agility, better quality and high-end functionality to drive multi-digit revenue growth. Widely used AI-powered applications such as virtual assistants, chatbots, algorithmic trading and purchase recommendation systems are fueling businesses with low marginal costs and growing revenues while providing a better customer experience. According to a survey by the National Business Research Institute, more than 62% of companies will deploy AI-powered fintech solutions in their applications to identify new opportunities and areas in which to scale the business.

What has led the disruption?
The financial sector is experiencing rapid technological evolution, from providing personalized financial services to executing smart operations that simplify complex and repetitive processes. Machine learning and predictive analytics have enabled financial companies to provide smart suggestions on buying and selling stocks, bonds and commodities. Insurance companies are automating their loan applications, thereby saving countless hours. Leading investment bank Goldman Sachs automated its stock trading business, replacing trading professionals with computer engineers. BlackRock, one of the world's largest asset management companies, offers high-net-worth investors an automated advice platform that supersedes highly paid Wall Street professionals. Applications such as algorithmic trading, personal chatbots, fraud prevention and detection, stock recommendations, and credit risk assessment are the ones finding their merit in banking and financial services companies. Let us understand the changing scenarios with next-gen technologies:

Fraud Prevention & Detection
Fraud prevention is tackled by firms using anomaly detection APIs designed with machine learning and deep learning mechanisms. These help identify and report any suspicious or fraudulent activity taking place amongst billions of transactions on a daily basis. Fintech companies are pouring capital into handling cyber-crime, resulting in global spending of more than 400 billion dollars annually. Multinational giants such as MasterCard, Sun Financial, Goldman Sachs, and the Bank of England use AI-powered systems to safeguard against money laundering, banking fraud and illegal transactions. Danske Bank, a renowned Nordic financial service provider, deployed AI engines in its operations, helping it investigate millions of online banking transactions in less than a second. With this, the cost of fraud investigation dropped drastically and actionable insights arrived faster.
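To give a rough sense of what anomaly detection over transactions can look like in code, here is a deliberately simplified sketch using scikit-learn's IsolationForest. The transaction features, numbers and contamination rate are invented for illustration; this is not the system used by Danske Bank, MasterCard or any other firm named above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical transaction features: [amount in USD, hour of day, distance from home in km].
normal = np.column_stack([
    rng.normal(60, 20, 5000),      # everyday purchase amounts
    rng.normal(14, 4, 5000),       # mostly daytime activity
    rng.normal(5, 3, 5000),        # close to the cardholder's usual area
])
suspicious = np.array([[4800, 3, 900],    # large amount, 3 a.m., far from home
                       [2500, 2, 1200]])

# Fit on historical transactions; contamination is a rough guess at the fraud rate.
detector = IsolationForest(contamination=0.001, random_state=0).fit(normal)

print(detector.predict(suspicious))   # -1 flags an anomaly worth investigating
print(detector.predict(normal[:3]))   # most routine transactions come back as 1
```

In practice, such a score would typically feed a review queue rather than block transactions outright, since false positives carry a cost too.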
AI Powered Chatbots
Chatbots are automated customer support chat applications powered by Natural Language Processing (NLP). They help deliver quick, engaging, personalized, and effective conversations to the end user. With an upsurge in the number of investors and varied investment options, customers seek financial guidance, profitable investment options and query resolution, faster and in real time. A large number of banks, such as Barclays, Bank of America and JPMorgan Chase, are widely using AI-supported digital chatbots to automate their client support, delivering an effective customer experience along with smarter financial decisions. Bank of America, one of the largest banks in the US, launched Erica, a chatbot which guides customers with investment option notifications, easy bill payments, and weekly updates on their mortgage score. MasterCard offers its customers a chatbot which not only allows them to review their bank balance or transaction history but also facilitates seamless payments worldwide.

Credit Risk Management
For money lenders, the most common business risk is credit risk, and it piles up largely due to inaccurate credit risk assessment of borrowers. If you are unaware of the term, credit risk is simply the risk of a borrower defaulting on repayment of the loan amount. AI-backed credit risk evaluation tools developed using predictive analytics and advanced machine learning techniques have enabled bankers and financial service providers to simplify borrower credit evaluation, thereby transforming the labor-intensive scorecard assessment method. Wells Fargo, an American international banking company, adopted AI technology for mortgage verification and loan processing, resulting in lower market exposure risk on its lending assets. With this, the team was able to establish smarter and faster credit risk management, analyzing millions of structured and unstructured data points and proving AI an extremely valuable asset for credit security and assessment.

Algorithmic Trading
More than half of US citizens own individual stocks, mutual funds, or exchange-traded funds, and a good number of them trade on a daily basis, making it imperative for major brokerage and financial trading companies to offer AI-powered algorithmic trading platforms. These platforms enable customers to execute trades strategically and capture significant returns. The algorithms analyse hundreds of millions of data points and distill a decisive trading pattern, enabling traders to book higher profits in every microsecond of the trading hour. France-based international bank BNP Paribas deployed algorithmic trading which aids its customers in executing trades strategically and provides a graphical representation of stock market liquidity. With the help of this, customers are able to determine the most appropriate ways of executing trades under various market conditions. Advances in automated trading have provided users with suggestions and rich insights, helping humans make better decisions.
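As a rough feel for what drawing a trading pattern out of many data points can mean, the snippet below implements a textbook moving-average crossover signal in pandas. The prices are synthetic and the window lengths are arbitrary; it is an illustration only, not the logic used by BNP Paribas or any platform named above.

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices standing in for a real market data feed.
rng = np.random.default_rng(seed=42)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)),
                   index=pd.date_range("2017-01-01", periods=500, freq="D"))

# Classic crossover rule: hold the asset while the fast average sits above the slow one.
fast = prices.rolling(window=20).mean()
slow = prices.rolling(window=50).mean()
signal = (fast > slow).astype(int)          # 1 = hold the asset, 0 = stay in cash

# Daily strategy returns: yesterday's signal applied to today's price change.
returns = prices.pct_change()
strategy_returns = signal.shift(1) * returns
print("Buy-and-hold return: %.2f%%" % (100 * (prices.iloc[-1] / prices.iloc[0] - 1)))
print("Crossover strategy return: %.2f%%" % (100 * ((1 + strategy_returns.fillna(0)).prod() - 1)))
```

Real trading desks layer execution logic, risk limits and transaction costs on top of any such signal, but the pattern of deriving a rule from rolling statistics over the data is the same.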
How do we see the future of AI in the financial sector?
The influence of AI in fintech has marked disruption in almost every financial institution, from investment banks to retail banking to small credit unions. Data science and machine learning practitioners are endeavoring to position AI as an essential part of the banking ecosystem, and financial companies are working with data analytics and fintech professionals to make AI the primary interface for interaction with their customers. However, the sector commonly faces challenges in adopting emerging technologies, and AI is no exception. The foremost challenge companies face is the availability of massive amounts of clean, rich data to train machine learning algorithms. The next hurdle is the reliability and accuracy of the insights provided by an AI-driven solution: in a dynamic market, businesses could see the efficacy of their models decline, causing serious harm to the company. Hence, they need to be smart about it and cannot solely trust AI technology to achieve the business mission. The absence of emotional intelligence in chatbots is another area of concern, resulting in an unsatisfactory customer service experience. While there may be other roadblocks, rising investment in AI technology should assist financial companies in overcoming such challenges and developing competitive intelligence in their product offerings. Looking at the near future, the adoption of cutting-edge technologies such as machine learning and predictive analytics will drive higher customer engagement, an exceptional banking experience, less fraud and higher operating margins for banks, financial institutions and insurance companies.
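Circling back to the chatbot use case described earlier, the sketch below shows the intent-classification step that typically sits behind such bots. The example utterances and intent labels are hypothetical, built here with scikit-learn purely for illustration, and have no relation to Erica, MasterCard's bot, or any other product mentioned in this article.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances mapped to banking intents (toy data, not a real corpus).
utterances = [
    "what is my account balance", "how much money do I have",
    "pay my credit card bill", "schedule a bill payment",
    "show my recent transactions", "what did I spend last week",
]
intents = ["balance", "balance", "bill_pay", "bill_pay", "history", "history"]

# TF-IDF features + logistic regression: a minimal stand-in for the NLP layer of a chatbot.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["can you pay my electricity bill"]))   # expected: ['bill_pay']
print(model.predict(["how much is left in my account"]))    # expected: ['balance']
```

A production bot adds entity extraction, dialogue state and hand-off to human agents, but mapping a customer utterance to an intent is the core NLP step.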

What makes programming languages simple or complex?

Antonio Cucciniello
12 Jun 2017
4 min read
Have you been itching to learn a new programming language? Maybe you want to learn your first programming language and don't know what to choose. When learning a new language (especially your first) you want to minimize the number of unknowns you will face, so you may want to choose a programming language that is simpler. Or maybe you are up for a challenge and want to learn something difficult! Today we are going to answer the question: what makes programming languages simple or complex?

Previous experience
The amount of experience you have programming or learning different programming concepts can greatly impact how well you learn a new language. If this is your tenth programming language, you will most likely have seen plenty of the new language's content before, which can greatly reduce the complexity. On the other hand, if this is your first language, you will be learning many new concepts and ideas that are natural to programming, and that may make the language seem more complex than it probably is. Takeaway: the more programming experience you have, the lower the chances a programming language will feel complex to you.

Syntax
The way you need to write code in a language can really affect its complexity. Some languages have many syntax rules that can be a nuisance when learning and will leave you confused. Other languages have fewer rules, which makes them easier to understand for someone not familiar with the language. Additionally, for those with previous experience, if the new language has syntax similar to the old language, it will help in the learning process. Another factor similar to syntax is how the code looks to the reader. In my experience, the more the code has variable and function names that resemble the English language, the easier it is to understand. Takeaway: the more syntax rules, the more difficult a language can be to learn.

Built-in functionality
The next factor is how much built-in functionality a language has. If the language has been around for years and is being continuously updated, chances are it has plenty of helper functions and plenty of functionality. Some newer languages might not yet have as much built-in functionality to make development easier. Takeaway: generally, languages with more built-in functionality make it easier to implement what you need in code.

Concepts
The fourth topic we are going to discuss here is concepts. That is, what programming concepts does this language use? There are plenty out there, like object-oriented programming, memory management, inheritance and more. Depending on what concepts are used in the language, as well as your previous understanding of those concepts, you could either really struggle with learning the language or potentially find it easier than most. Takeaway: your previous experience with specific concepts, and the complexity of the concepts in the language, affect the complexity of the language as a whole.

Frameworks & libraries
Frameworks and libraries are very similar to built-in functionality. They are developed to make something in the language easier, or to simplify a task that you would normally have to do yourself in code. So, with more frameworks, you can make development easier than normal. Takeaway: if a language has plenty of support from libraries and frameworks, its complexity decreases.

Resources
Our last topic here is arguably the most important. Ultimately, without high-quality documentation it can be very hard to learn a language. When looking for resources on a language, check out books, blog posts, tutorials, videos, documentation and forums to make sure there is plenty of material on the topic. Takeaway: the more high-quality resources out there on the programming language, the easier it will be to learn.

Whether a programming language is simple or complex truly depends on a few factors: your previous experience, the syntax of the language, built-in functionality, the concepts used, frameworks for support, and the high-quality resources available.

About the Author
Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++ and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello