Tech Guides

852 Articles

5G - Trick or Treat?

Melisha Dsouza
31 Oct 2018
3 min read
5G - or "fifth generation" - mobile internet is coming very soon, possibly early next year. It promises much faster data download speeds, 10 to 20 times faster than we have now. With an improvement in upload speeds, wider coverage, and more stable connections, 5G is something to watch out for.

Why are people excited about 5G?

Mobile is today the main way people use the internet, and that change has come at an amazing pace. With this increase in mobile users, demand for services like music and video streaming has skyrocketed. This can cause particular problems when lots of people in the same area access online mobile services at the same time, congesting existing spectrum bands and resulting in service breakdowns. 5G will use the radio spectrum much more efficiently, enabling more devices to access mobile internet services at the same time.

But it's not just about mobile users. It's also about the internet of things and smart cities. For example, as cities look to become better connected, with everything from streetlights to video cameras in some way connected to the internet, this network will support this infrastructure in a way that would have previously been impossible. From swarms of drones carrying out search and rescue missions, to fire assessments and traffic monitoring, 5G really could transform the way we understand and interact with our environment. It's not just about movies downloading faster; it's also about autonomous vehicles communicating with each other seamlessly and reading live map and traffic data to take you to your destination in a more efficient and environmentally friendly way. 5G will also go hand-in-hand with AI, propelling its progress!

5G: trick or treat?

All this being said, there will be an increase in cost to employ skilled professionals to manage 5G networks. Users will also need to buy new smartphones that support this network - even some of the most up-to-date phones will need to be replaced. When 4G was introduced in 2009/10, compatible smartphones came onto the market before the infrastructure had been rolled out fully. That's a possibility with 5G, but it does look like it might take a little more time. This technology is still under development and will take some time to be fully operational without any issues. We will leave it up to you to decide whether the technology is a Trick or a Treat!

How 5G Mobile Data will propel Artificial Intelligence (AI) progress
VIAVI releases Observer 17.5, a network performance management and diagnostics tool

Data science on Windows is a big no

Aaron Lazar
13 Apr 2018
5 min read
I read a post from a LinkedIn connection about a week ago. It read: "The first step in becoming a data scientist: forget about Windows." Even if you're not a programmer, that's pretty controversial. The first nerdy thought I had was, that's not true. The first step to Data Science is not choosing an OS, it's statistics! Anyway, I kept wondering what's wrong with doing data science on Windows, exactly. Why is the legacy product (Windows), created by one of the leaders in Data Science and Artificial Intelligence, not suitable to support the very thing it is driving?

As a publishing professional who has worked with a lot of authors, one of the main issues I've faced while collaborating with them is the compatibility of platforms, especially when it comes to sharing documents, working with code, etc. At least 80 percent of the authors I've worked with have been using something other than Windows. They are extremely particular about the platform they're working on, and have usually chosen Linux. I don't know if they consider it a punishable offence, but I've been using Windows since I was 12, even though I have played around with Macs and machines running Linux/Unix. I've never been affectionately drawn towards those machines as much as my beloved laptop that is happily rolling on Windows 10 Pro.

Why is data science on Windows a bad idea?

When Microsoft created Windows, its main idea was to make the platform as user friendly as possible, and it focused every ounce of energy on that. Voila! They created one of the simplest operating systems that one could ever use. Microsoft wanted to make computing easy for everyone - teachers, housewives, kids, business professionals. However, they did not consider catering to the developer community as much as to those users.

Now that's not to say that you can't really use a Windows machine to code. Of course, you can run Python or R programs. But you're likely to face issues with compatibility and speed. If you're choosing to use the command line, and something goes wrong, it's a real PITA to debug on Windows. Also, if you're doing cluster computing with other Linux/Macs, it's better to have one of them yourself. Many would agree that Windows is more likely to suffer a BSoD (Blue Screen of Death) than a Mac or a Unix machine, messing up your algorithm that's been running for a long time.

Check out our most read post: 15 useful Python libraries to make your Data science tasks easier.

Is it all that bad?

Well, not really. In fact, if you need to pump in a couple more gigs of RAM, you can't think of doing that on a Mac. Although you might still encounter some weird stuff like those mentioned above, on a Windows PC you can always Google up a workaround. Don't beat yourself up if you own a PC. You can always set up a dual boot, running a Linux distribution in parallel. You might want to check out Vagrant for this. Also, you'll be surprised if you're a Mac owner and you plan some heavy duty Deep Learning on a GPU: you can't really run CUDA without messing things up. CUDA will only work well with NVIDIA's GPUs on a PC.

In Joey Tribbiani's words, "This is a moo point." To me, data science is really OS agnostic. For instance, now with Docker, you don't really have to worry much about which OS you're running - so from that perspective, data science on Windows may work for you (see the short Docker sketch at the end of this post). Still feel for Windows? Well, there are obviously drawbacks.
You’ll still keep living with the fear of isolation that Microsoft tries to create in the minds of customers. Moreover, you’ll be faced with “slowdom” if that’s a word, what with all the background processes eating away your computing power! You’ll be defying everything that modern computing is defined by - KISS, Open Source, Agile, etc. Another important thing you need to keep in mind is that when you’re working with so much data, you really don’t wanna get hacked! Last but not the least, if you’re intending to dabble with AI and Blockchain, your best bet is not going to be Windows. All said and done, if you’re a budding data scientist who’s looking to buy some new equipment, you might want to consider a few things before you invest in your machine. Think about what you’ll be working with, what tools you might want to use and if you want to play safe, it’s best to go with a Linux system. If you have the money and want to flaunt it, while still enjoying support from most tools, think about a Mac. And finally, if you’re brave and are not worried about having two OSes running on your system, go in for a Windows PC. So the next time someone decides to gift you a Windows PC, don’t politely decline right away. Grab it and swiftly install a Linux distro! Happy coding! :) *I will put an asterisk here, for the thoughts put in this article are completely my personal opinion and it might differ from person to person. Go ahead and share your thoughts in the comments section below.
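To make the Docker point above a little more concrete, here is a minimal, illustrative sketch (not part of the original post) that starts a containerized scientific Python stack from Python itself. It assumes Docker is installed and that the third-party docker package (docker-py) is available; the image name is just one public example.

import docker  # pip install docker

client = docker.from_env()                    # talk to the local Docker daemon
container = client.containers.run(
    "jupyter/scipy-notebook",                 # a public scientific Python image
    detach=True,                              # run in the background
    ports={"8888/tcp": 8888},                 # expose the notebook server locally
)
print(container.short_id, container.status)   # the same container runs identically
                                              # on Windows, macOS, or Linux hosts

The point is not the specific image or library: once the stack lives in a container, the host OS largely stops mattering.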

Jupyter as a Data Laboratory: Part 1

Marijn van Vliet
18 May 2016
5 min read
This is part one of a two-part piece on Jupyter, a computing platform used by many scientists to perform their data analysis and modeling. This first part will help you understand what Jupyter is, and the second part will cover why it represents a leap forward in scientific computing.

Jupyter: a data laboratory

If you think that scientists, famous for being careful and precise, always produce well-documented, well-tested, and beautiful code, you are dead wrong. More often than not, a scientist's local code folder is an ill-organized heap of horrible spaghetti code that will give any seasoned software developer nightmares. But the scientist will sleep soundly. That is because usually, the sort of programming that scientists do is a lot different from software development. They tend to write programming code for a whole different purpose, with a whole different mindset, and with a whole different approach to computing. If you have never done scientific computing before—by which I mean you have never used your computer to analyze measurement data or to "do science"—then leave your TDD, SCRUM, agile, and so on at the door and come join me for a little excursion into Jupyter.

The programming language is your user interface

Over the years, programmers have created applications to cover most computing needs of most users. In domains such as content creation, communication, and entertainment, chances are good that someone already wrote an application that does what you want to do. If you're lucky, there's even a friendly GUI to help guide you through the process. But in science, the point is usually to try something that nobody has done before. Hence, any application used for data analysis needs to be flexible. The application has to enable the user to do, well, anything imaginable, with a dataset; and the GUI paradigm breaks down. Instead of presenting the user with a list of available options, it becomes more efficient to just ask the user what needs to be accomplished. When driven to the extreme, you end up dropping the whole concept of an application and working directly with a programming language. So it is understandable that when you start Jupyter, you are staring at a mostly blank screen with a blinking cursor. Realize that behind that blinking cursor sits the considerable computational power of your computer—most likely a multicore processor, gigabytes of RAM, and terabytes of storage space—awaiting your command. In many domains, a programming language is used to create an application, which in turn presents you with an interface to do the operation you wanted to do in the first place. In scientific computing, however, the programming language is your interface.

The ingredients of a data laboratory

I think of Jupyter as a data laboratory. The heart of a data laboratory is a REPL (a read-eval-print loop, which allows you to enter lines of programming code that immediately get executed, and the result is displayed on the screen). The REPL can be regarded as a workbench, and loading a chunk of data into working memory can be regarded as placing a sample on it, ready to be examined. Jupyter offers several advanced REPL environments, most notably IPython, which runs on your terminal and also ships with its own tricked-out terminal to display inline graphics and offer easier copy-paste.
However, the most powerful REPL that Jupyter offers runs in your browser, allowing you to use multiple programming languages at the same time and embed inline markdown, images, videos, and basically anything the browser can render. The REPL allows access to the underlying programming language. Since the language acts as our primary user interface, it needs to get out of our way as much as possible. This generally means it should be high-level with terse syntax and not be too picky about correctness. And of course, it must support an interpreted mode to allow a quick back-and-forth between a line of code and the result of the computation. Of the multitude of programming languages supported by Jupyter, it ships with Python by default, which fulfills the above requirements nicely. In order to work with the data efficiently (for example, to get it onto your workbench in the first place), you'll want software libraries (which can be regarded as shelves that hold various tools like saws, magnifiers, and pipettes). Over the years, scientists have contributed a lot of useful libraries to the Python ecosystem, so you can have your pick of favorite tools. Since the APIs that are exposed by these libraries are as much a part of the user interface as the programming language, a lot of thought gets put into them. While executing single lines or blocks of code to interactively examine your data is essential, the final ingredient of the data laboratory is the text editor. The editor should be intimately connected to the REPL and allow for a seamless transmission of text between the two. The typical workflow is to first try a step of the data analysis live in the REPL and, when it seems to work, write it down into a growing analysis script. More complicated algorithms are written in the editor first in an iterative fashion, testing the implementation by executing the code in the REPL. Jupyter's notebook environment is notable in this regard, as it blends the REPL and the editor together. Go check it out If you are interested in learning more about Jupyter, I recommend installing it and checking out this wonderful collection of interesting Jupyter notebooks. About the author Marijn van Vliet is a postdoctoral researcher at the Department of Neuroscience and Biomedical Engineering of Aalto University in Finland. He received his PhD in biomedical sciences in 2015.
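To make the REPL-to-script workflow described above a little more concrete, here is a minimal, illustrative Python sketch (not from the original article): a couple of steps are tried interactively at the REPL, and the one that works is then consolidated into a small function in the growing analysis script. The data is synthetic; pandas and NumPy are assumed to be installed.

import numpy as np
import pandas as pd

# --- tried live at the REPL: put a "sample" on the workbench and poke at it ---
df = pd.DataFrame({"signal": np.random.default_rng(0).normal(size=1000)})
print(df["signal"].describe())                  # quick look at the data
print(df["signal"].rolling(50).mean().tail())   # try a smoothing step interactively

# --- once it works, write it down in the analysis script for reuse ---
def smooth(series, window=50):
    """The smoothing step that worked at the REPL, kept as a reusable function."""
    return series.rolling(window).mean()

print(smooth(df["signal"]).tail())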

Do you want to know what the future holds for privacy? It’s got Artificial Intelligence on both sides.

Guest Contributor
21 Aug 2018
8 min read
AI and machine learning are quickly becoming integral parts of modern society. They've become common personal and household objects in this era of the Internet of Things. No longer are they relegated to the inner workings of gigantic global corporations or military entities. AI is taking center stage in our very lives and there's little we can do about it.

Tech giants like Google and Amazon have made it very easy for anyone to get their hands on AI-based technology in the form of AI assistants and a plethora of MLaaS (machine-learning-as-a-service) offerings. These AI-powered devices can do anything from telling you the weather and finding you a recipe for your favorite pasta dish to letting you know your friend Brad is at the door - and opening that door for you. What's more, democratized AI tools make it easy for anyone (even without coding experience) to try their hand at building machine learning based apps. Needless to say, a future filled with AI is a future filled with convenience. If Disney's film "Wall-e" was any hint, we could spend our whole lives in a chair while letting self-learning machines do everything we need to do for us (even raising our kids). However, the AI of today could paint an entirely different picture of the future for our privacy.

The price of convenience

Today's AI is hungry for your personal information. Of course, this isn't really surprising, seeing as it was birthed by companies like Google that make most of their yearly income from ad revenue. In one article written by Gizmodo, a privacy flaw was found in Google's then newest AI creation. The AI assistant would be built into every Google Pixel phone and would run on their messenger app "Allo". Users could simply ask the assistant questions like "what's the weather like tomorrow" or "how do I get to Brad's house". Therein lies the problem. In order for an AI assistant to adjust according to your own personal preferences, it has to first learn and remember all of your personal information - every intimate detail that makes you, you. It does this by raking in all the information stored in your device (like your contacts list, photos, messages, and location). This poses a huge privacy issue since it means you're sharing all your personal information with Google (or whichever company manufactures your AI-driven assistant). In the end, no one will know you better than yourself - except Google.

Another problem with this AI is that it can only work if your messages are unencrypted. You can either opt for more privacy by choosing to use the built-in end-to-end encrypted mode, or opt for more convenience by turning off encrypted mode and letting the AI read/listen to your conversations. There is no middle ground yet. Why is this such a big problem? Two reasons: companies like Google use or sell your private information to third parties to make their money; and Google isn't exactly the most trustworthy with users' secrets. If your AI manufacturer behaves like Google, that privacy policy that you're relying on will mean nothing once the government starts knocking on their door.

VPNs vs AI

How AI learns from your personal information is just the tip of the iceberg. There's a deeper privacy threat looming just behind the curtain: bad actors waiting to use AI for their own nefarious purposes. One study compared human hackers with artificial hackers to see who could get more Twitter users to click on malicious phishing links. The results showed that artificial hackers substantially outperformed their human counterparts.
The artificial hacker pumped out more spear-phishing tweets that resulted in more conversions. This shows how powerful AI can be once it’s weaponized by hackers. Hackers may already be using AI right now- though it’s still hard to tell. Users are not without means to defend themselves, though. VPNs have long been used as a countermeasure against hackers. The VPN industry has even grown due to the recent problems regarding user data and personal information like the Facebook-Cambridge Analytica scandal and how the EU’s GDPR effectively drove many websites to block IPs from the EU. A VPN (Virtual Private Network) protects your privacy by masking your IP. It also routes your internet traffic through secure tunnels where it is encrypted. Most VPNs on the market currently use military-grade 256-bit AES to encrypt your data along with a multitude of various security features. The problem is that anyone with the time and resources can still break through your VPN’s defense- especially if you’re a high profile target. This can either be done by getting the key through some nefarious means or by exploiting known vulnerabilities to break into the VPN’s encryption. Breaking a VPN’s encryption is no easy task as it will take lots of computation and time- we’re talking years here. However, with the rise of AI, the process of breaking a VPN’s encryption may have become easier. Just 2 years ago, DARPA, the US government agency that commissions research for the US Department of Defense, funded the Cyber Grand Challenge. Here, computers were pitted against each other to find and fix bugs in their systems. The winner, a computer named “Mayhem” created by a team named “ForAllSecure”, took home the $2 million prize. It achieved its goal by not only patching any holes it found in its own system but also by finding and exploiting holes in its opponents’ software before they could be patched. Although the whole point of the challenge was to speed up the development of AI to defend against hackers, it also showed just how powerful an artificial hacker can be. A machine that could quickly process heaps and heaps of data while developing more ways to defend/attack from its own processes is a double-edged sword. This is why some VPN companies have started incorporating AI to defend against hackers- human or otherwise. The future of VPNs is AI augmented “If you can’t beat them, join them.” One VPN that has started using AI as part of their VPN service is Perfect Privacy. Their AI takes the form of Neuro routing (AI-based routing). With this, the AI makes a connection based on where the user is connecting to. The AI chooses the closest server to the destination server and does so separately for all connections. This means that if you’re in Romania but you’re connecting to a website hosted in New York, the VPN will choose a New York-based location as an exit server. This not only reduces latency but also ensures that all traffic remains in the VPN for as long as possible. This also makes the user appear to have different IPs on different sites which only bolsters privacy even more. Also, because the AI is dynamic in its approach, it frequently changes its route to be the shortest route possible. This makes its routes nigh impossible to predict. If you’d like a more detailed look at Perfect Privacy and its AI-based routing, check out this Perfect Privacy review. Some experts believe that someday in the future, we may just let AI handle our security in the Internet of Things for us. 
Just recently this year, a wireless VPN router called "Fortigis" was released, touting AI-based defenses. The router uses self-learning AI to keep your connection safe by learning from attack attempts made on any Fortigis router. All devices are then updated to defend against such attacks, thereby ensuring up-to-date security. It also allows you to control who can connect to your home network, alerts you when someone connects, and informs you of all the devices connected to your home network. These are just some of the ways the VPN industry is keeping up with the security needs of the times. Who knows what else the future could bring. Whatever it is, one thing is for sure: artificial intelligence will be a big part of it.

About the author

Dana Jackson is a U.S. expat living in Germany and the founder of PrivacyHub. She loves all things related to security and privacy. She holds a degree in Political Science, and loves to call herself a scientist. Dana also loves morning coffee and her dog Paw.

10 great tools to stay completely anonymous online
Guide to safe cryptocurrency trading
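As a small aside on the "256-bit AES" mentioned earlier in this piece, the snippet below is a purely illustrative Python sketch of symmetric AES-256 encryption using the third-party cryptography package. It is not how any particular VPN is implemented; it just shows the cipher family they cite.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # a 256-bit key
nonce = os.urandom(12)                       # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"traffic to protect", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"traffic to protect"    # round trip only works with the key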

The best automation tools for sysadmins

Rick Blaisdell
09 Aug 2017
4 min read
Artificial Intelligence and cognitive computing have made huge strides over the past couple of years. Today, software automation has become an important tool that provides businesses with the necessary assets to keep up with market competition. Just take a look at ATMs, which have replaced bank tellers, or smart apps that have brought airline boarding passes to your fingertips. Moreover, as some statistics reveal, in the next couple of years around 45 percent of work activities will be replaced or affected by robotic process automation. However, this is not the focus of this post. Our focus here is how automation is helping us keep up the pace and streamline our activities.

So let's take a look at system administrators. There are plenty of tasks performed by sysadmins that could easily be automated. To make the job easier, here is a list of automation software that any system administrator would be interested in:

- WPKG – An automated software deployment, upgrade, and removal program that allows you to build dependency trees of applications. The tool runs in the background and doesn't need any user interaction. WPKG can be used to automate Windows 8 deployment tasks, so it's good to have in any toolbox.
- AutoHotkey – An open-source scripting language for Microsoft Windows that allows you to create keyboard and mouse macros. One of its most advantageous features is the ability to create stand-alone, fully executable .exe files from any script that run on other PCs.
- Puppet Open Source – I think every IT professional has heard about Puppet and how it has captured the market during the last couple of years. This tool allows you to automate your IT infrastructure from acquisition through provisioning and management. The advantages? Scalability and scope!

As I mentioned, automation has already started to change the way we do business, and it will continue doing so in the upcoming years. It can be a strong lever for improving the service you offer to your end users. Let's take a dive into the benefits of automation:

- Reliability – This might be considered one of the biggest advantages that automation can provide to an organization. Take computer operations as an example: it requires a professional with both technical skills and agility in pressing buttons and other physical operations. We all know that human error is one of the most common problems in any business; automation removes these errors.
- System performance – Every business wishes to improve performance. Automation, due to its flexibility and agility, makes that possible.
- Productivity – Today, we rely on computers and, most of the time, we work on complex tasks. One of the perks that automation offers is increased productivity through job-scheduling software. It eliminates the lag time between jobs while minimizing operator intervention.
- Availability – We all know how far cloud computing has come and how many hours of unavailability can cost. Automation can help by delivering high availability. Take service interruptions, for example: if your system crashes and automated backups are available, then you have nothing to worry about. Automated recovery will always play a huge role in business continuity.

Automation will always be in our future, and it will continue to grow. The key to using it to its full potential is to understand how it works and how it can help our business, regardless of the industry, as well as finding the best software to maximize efficiency.
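To illustrate the job-scheduling idea mentioned above, here is a minimal, hypothetical Python sketch using the third-party schedule package; the job names and times are placeholders, not recommendations.

import time
import schedule  # pip install schedule

def nightly_backup():
    print("running backup job...")        # call your real backup script here

def health_check():
    print("checking service health...")   # e.g. ping a server, check disk space

schedule.every().day.at("02:00").do(nightly_backup)   # once per day at 02:00
schedule.every(10).minutes.do(health_check)           # every 10 minutes

while True:
    schedule.run_pending()   # execute any job whose scheduled time has arrived
    time.sleep(30)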
About the author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies developing innovative technology strategies.

There’s another player in the advertising game: augmented reality

Guest Contributor
10 Jul 2018
7 min read
Whether a customer buys a product does not necessarily depend on their need for it; it often depends on how well the product has been advertised. Most advertising companies target customer emotions and experiences to sell their product. However, with increasing online awareness, intrusive ads, and an oversaturated advertising space, customers rely more on online reviews before purchasing any product. Companies have to think out of the box to get customers engaged with their product! Augmented Reality can help companies fetch their audience back by creating an interactive buying experience on their device that converts casual browsing activity into a successful purchase.

It is estimated that there are around 4 billion users in the world who are actively engaged on the internet. Over half of the world's population is active online, which means having an online platform will be beneficial - but it is also a large audience that needs to be engaged in the right way. For now, AR is still fairly new in the advertising world, but it's expected that by 2020 AR revenue will outweigh VR (Virtual Reality) by about $120 billion, and it's no surprise this is the case.

Ways AR can benefit businesses

There are many reasons why AR could be beneficial to a business:

Creates Emotional Connection
AR provides the platform for advertising companies to engage with their audiences in a unique way, using an immersive advertisement to create a connection that brings the consumer's emotions into play. A memorable experience encourages them to make purchases because, psychologically, it was an experience like no other and one they're unlikely to get elsewhere. It can also help create exposure: because of the excitement that users had, they'll encourage others to try it too.

Saves Money
It's amazing to think that such advanced technology can be cheaper than your traditional method of advertising. Print advertising can still be an extremely expensive method in many cases, given that it is a high-volume game, and due to the cost of placing an ad on the front page of a publication. AR ads can vary depending on the quality, but even some of the simplest forms of AR advertising can be affordable.

Increases Sales
Not only is AR a useful tool for promoting goods and services, but it also provides the opportunity to increase conversions. One issue that many customers have is whether the goods they are purchasing are right for them. AR removes this barrier and enables them to 'try out' the product before they purchase, making it more likely for the customer to buy.

Examples of AR advertising

Early adopters have already taken up the technology for showcasing their services and products. It's not mainstream yet but, as the above figures suggest, it won't be long before AR becomes widespread. Here are a few examples of companies using AR technology in their marketing strategy.

IKEA's virtual furniture
IKEA is the famous Swedish home retailer that adopted the technology back in 2013 for its iOS app. The idea allowed potential purchasers to scan the catalogue with their mobile phone and then browse products through the app. When they selected something that they thought might be suitable for their home, they could see the virtual furniture through their phone or tablet in their living space. This way customers could judge whether it was the right product or not.
Pepsi Max’s Unbelievable Campaign Pepsi didn’t necessarily use the technology to promote their product directly but instead used it to create a buzz for the brand. They installed screens into an everyday bus shelter in London and used it to layer virtual images over a real-life camera. Audiences were able to interact with the video in the bus shelter through the camera that was installed on the bus shelter. The video currently has over 8 million views on Youtube and several shares have been made through social networks. Lacoste’s virtual trial room on a marker Lacoste launched an app that used marker-based AR technology where users were able to stand on a marker in the store that allowed them to try on different LCST branded trainers. As mentioned before, this would be a great way for users to try on their apparel before deciding whether to purchase it. Challenges businesses face with integrating AR into their advertising plan Although AR is an exciting prospect for businesses and many positives can be taken from implementing it into advertising plans, it has its fair share of challenges. Let’s take a brief look into what these could be. Mobile Application is required AR requires a specific type of application in order to work. For consumers to engage themselves within an AR world they’ll need to be able to download the specific app to their mobile first. This means that customers will find themselves downloading different applications for the companies that released their app. This is potentially one of the reasons why some companies have chosen not to invest in AR, yet. Solutions like augmented reality digital placement (ARDP) are in the process of resolving this problem. ARDP uses media-rich banners to bring AR to life in a consumer’s handheld device without having to download multiple apps. ARDP would require both AR and app developers to come together to make AR more accessible to users. Poor Hardware Specifications Similar to video and console games, the quality of graphics on an AR app greatly impacts the user experience. If you think of the power that console systems output, if a user was to come across a game they played that had poor graphics knowing the console's capabilities, they will be less likely to play it. In order for it to work, the handheld device would need enough hardware power to produce the ideal graphics. Phone companies such as Apple and Samsung have done this over time when they’ve released new phones. So in the near future, we should expect modern smartphones to produce top of the range AR. Complexity in the Development Phase Creating an AR advertisement requires a high level of expertise. Unless you have AR developers already in your in-house team, the development stage of the process may prove difficult for your business. There are AR software development toolkits available that have made the process easier but it still requires a good level of coding knowledge. If the resources aren’t available in-house, you can either seek help from app development companies that have AR software engineering experience or you could outsource the work through websites such as Elance, Upwork, and Guru. In short, the development process in ad creation requires a high level of coding knowledge. The increased awareness of the benefits of implementing AR advertising will alert developers everywhere and should be seen as a rising opportunity. 
We can expect an increase in demand for AR developers, as those who have expertise in the technology will be high on the agenda for many advertising companies and agencies looking to take advantage of the market to engage with their customers differently. For projects that involve AR development, augmented reality developers should be at the forefront of business creative teams, ensuring that the ideas that are created can be implemented correctly.

About Jamie Costello
Jamie Costello is a student and an aspiring freelance writer based in Manchester. His interests are to write about a variety of topics but his biggest passion concerns technology. He says, "When I'm not writing or studying, I enjoy swimming and playing games consoles".

Read Next
Adobe glides into Augmented Reality with Adobe Aero
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Apple's new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more
Exploring Language Improvements in C# 7.2 and 7.3

Mark J.
28 Nov 2017
9 min read
With the C# 7 generation, Microsoft has decided to increase the cadence of language releases, releasing minor version numbers, aka point releases, for the first time since C# 1.1. This allows new features to be used by programmers faster than ever before, but the policy poses a challenge to writers of books about C#. Introduction One of the hardest parts of writing for technology is deciding when to stop chasing the latest changes and adding new content. Back in March 2017, I was reviewing the final drafts of the second edition of my book, C# 7 and .NET Core – Modern Cross-Platform Development. In Chapter 2, Speaking C# I got to the topic of representing number literals. One of the improvements in C# 7 is the ability to use the underscore character as a digit separator. For example, when writing large numbers in decimal you can improve the readability of number literals using underscores, and you can express binary or hexadecimal number literals by prefixing the number literal with 0b or 0x, as shown in the following code: // C# 6 and earlier int decimalNotation = 2000000; // 2 million // C# 7 and 7.1 int decimalNotation = 2_000_000; // 2 million int binaryNotation = 0b0001_1110_1000_0100_1000_0000; // 2 million int hexadecimalNotation = 0x001E_8480; // 2 million But in the final draft I hadn't included code examples of using underscores in number literals. At the last minute, I decided to add the preceding examples to the book. Unfortunately, I assumed that the underscore could be used to separate the prefixes 0b and 0x from the digits, and did not check the code examples would compile until the following day, after the book had gone to print. I had to release an erratum on the book's web page before it even reached the shelves. I felt so embarrassed. In the third edition, C# 7.1 and .NET Core 2.0 – Modern Cross-Platform Development, I fixed the code examples by removing the unsupported underscores after the prefixes since they are not supported in C# 7 or C# 7.1. Ironically, just as the third edition was due to go to print, Microsoft released C# 7.2, which adds support for using an underscore after the prefixes, as shown in the following code: // C# 7.2 and later int binaryNotation = 0b_0001_1110_1000_0100_1000_0000; // 2 million int hexadecimalNotation = 0x_001E_8480; // 2 million Gah! Clearly, I wasn't the only programmer who thought it is natural to be able to use underscores after the 0b or 0x prefixes. For the third edition, I decided not to make any last-minute changes to the book. This was partly because I didn't want to risk making a mistake again, and also because the code examples do work, they just don't show the latest improvement. Maybe in the fourth edition I will finally get the whole book perfect! But, of course, in the programming world that's impossible. Since the third edition covers C# 7.1, I have written this article to cover the improvements in C# 7.2 that are available today, and to preview the improvements coming early in 2018 with C# 7.3. Enabling C# 7 point releases Developer tools like Visual Studio 2017, Visual Studio Code, and the dotnet command line interface assume that you want to use the C# 7.0 language compiler by default. 
To use the improvements in a C# point release like 7.1 or 7.2, you must add a configuration element to the project file, as shown in the following markup: <LangVersion>7.2</LangVersion> Potential values for the <LangVersion> markup are shown in the following table: LangVersion Description 7, 7.1, 7.2, 7.3, 8 Entering a specific version number will use that compiler if it has been installed. default Uses the highest major number without a minor number, for example, 7 in 2017 and 8 later in 2018. latest Uses the highest major and highest minor number, for example, 7.2 in 2017, 7.3 early in 2018, 8 later in 2018. To be able to use C# 7.2, either install Visual Studio 2017 version 15.5 on Windows, or install .NET Core SDK 2.1.2 on Windows, macOS, or Linux from the following link: https://www.microsoft.com/net/download/ Run the .NET Core SDK installer, as shown in the following screenshot: Setting up a project for exploring C# 7.2 improvements In Visual Studio 2017 version 15.5 or later, create a new Console App (.NET Core) project named ExploringCS72 in a solution named Bonus, as shown in the following screenshot: You can download the projects created in this article from the Packt website or from the following GitHub repository: https://github.com/PacktPublishing/CSharp-7.1-and-.NET-Core-2.0-Modern-Cross-Platform-Development-Third-Edition/tree/master/BonusSectionCode/Bonus In Visual Studio Code, create a new folder named Bonus with a subfolder named ExploringCS72. Open the ExploringCS72 folder. Navigate to View | Integrated Terminal, and enter the following command: dotnet new console In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72.csproj file, and add the <LangVersion> element, as shown highlighted in the following markup: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp2.0</TargetFramework> <LangVersion>7.2</LangVersion> </PropertyGroup> </Project> Edit the Program.cs file, as shown in the following code: using static System.Console; namespace ExploringCS72 { class Program { static void Main(string[] args) { int year = 0b_0000_0111_1011_0100; WriteLine($"I was born in {year}."); } } } In Visual Studio 2017, navigate to Debug | Start Without Debugging, or press Ctrl + F5. In Visual Studio Code, in Integrated Terminal, enter the following command: dotnet run You should see the following output, which confirms that you have successfully enabled C# 7.2 for this project: I was born in 1972. In Visual Studio Code, note that the C# extension version 1.13.1 (released on November 13, 2017) has not been updated to recognize the improvements in C# 7.2. You will see red squiggle compile errors in the editor even though the code will compile and run without problems, as shown in the following screenshot: Controlling access to type members with modifiers When you define a type like a class with members like fields, you control where those members can be accessed by applying modifiers like public and private. Until C# 7.2, there have been five combinations access modifier keywords. C# 7.2 adds a sixth combination, as shown in the last row of the following table: Access modifier Description private Member is accessible inside the type only. This is the default if no keyword is applied to a member. internal Member is accessible inside the type, or any type that is in the same assembly. protected Member is accessible inside the type, or any type that inherits from the type. public Member is accessible everywhere. 
internal protected Member is accessible inside the type, or any type that is in the same assembly, or any type that inherits from the type. Equivalent to internal_OR_protected. private protected Member is accessible inside the type, or any type that inherits from the type and is in the same assembly. Equivalent to internal_AND_protected. Setting up a .NET Standard class library to explore access modifiers In Visual Studio 2017 version 15.5 or later, add a new Class Library (.NET Standard) project named ExploringCS72Lib to the current solution, as shown in the following screenshot: In Visual Studio Code, create a new subfolder in the Bonus folder named ExploringCS72Lib. Open the ExploringCS72Lib folder. Navigate to View | Integrated Terminal, and enter the following command: dotnet new classlib Open the Bonus folder so that you can work with both projects. In either Visual Studio 2017 or Visual Studio Code, edit the ExploringCS72Lib.csproj file, and add the <LangVersion> element, as shown highlighted in the following markup: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netstandard2.0</TargetFramework> <LangVersion>7.2</LangVersion> </PropertyGroup> </Project> In the class library, rename the class file from Class1 to AccessModifiers, and edit the class, as shown in the following code: using static System.Console; namespace ExploringCS72 { public class AccessModifiers { private int InTypeOnly; internal int InSameAssembly; protected int InDerivedType; internal protected int InSameAssemblyOrDerivedType; private protected int InSameAssemblyAndDerivedType; // C# 7.2 public int Everywhere; public void ReadFields() { WriteLine("Inside the same type:"); WriteLine(InTypeOnly); WriteLine(InSameAssembly); WriteLine(InDerivedType); WriteLine(InSameAssemblyOrDerivedType); WriteLine(InSameAssemblyAndDerivedType); WriteLine(Everywhere); } } public class DerivedInSameAssembly : AccessModifiers { public void ReadFieldsInDerivedType() { WriteLine("Inside a derived type in same assembly:"); //WriteLine(InTypeOnly); // is not visible WriteLine(InSameAssembly); WriteLine(InDerivedType); WriteLine(InSameAssemblyOrDerivedType); WriteLine(InSameAssemblyAndDerivedType); WriteLine(Everywhere); } } } Edit the ExploringCS72.csproj file, and add the <ItemGroup> element to reference the class library in the console app, as shown highlighted in the following markup: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp2.0</TargetFramework> <LangVersion>7.2</LangVersion> </PropertyGroup> <ItemGroup> <ProjectReference Include="..ExploringCS72LibExploringCS72Lib.csproj" /> </ItemGroup> </Project> Edit the Program.cs file, as shown in the following code: using static System.Console; namespace ExploringCS72 { class Program { static void Main(string[] args) { int year = 0b_0000_0111_1011_0100; WriteLine($"I was born in {year}."); } public void ReadFieldsInType() { WriteLine("Inside a type in different assembly:"); var am = new AccessModifiers(); WriteLine(am.Everywhere); } } public class DerivedInDifferentAssembly : AccessModifiers { public void ReadFieldsInDerivedType() { WriteLine("Inside a derived type in different assembly:"); WriteLine(InDerivedType); WriteLine(InSameAssemblyOrDerivedType); WriteLine(Everywhere); } } } When entering code that accesses the am variable, note that IntelliSense only shows members that are visible due to access control. 
Passing parameters to methods In the original C# language, parameters had to be passed in the order that they were declared in the method. In C# 4, Microsoft introduced named parameters so that values could be passed in a custom order and even made optional. But if a developer chose to name parameters, all of them had to be named. In C# 7.2, you can mix named and unnamed parameters, as long as they are passed in the correct position. In Program.cs, add a static method, as shown in the following code: public static void PassingParameters(string name, int year) { WriteLine($"{name} was born in {year}."); } In the Main method, add the following statement: PassingParameters(name: "Bob", 1945); Visual Studio Code will show an error, as shown in the following screenshot, but the code will compile and execute. Optimizing performance with value types The fourth and final feature of C# 7.2 is working with value types while using reference semantics. This can improve performance in very specialized scenarios. You are unlikely to use them much in your own code, unless like Microsoft themselves, you create frameworks for other programmers to build upon that need to do a lot of memory management. You can learn more about these features at the following link: https://docs.microsoft.com/en-gb/dotnet/csharp/reference-semantics-with-value-types Conclusion I plan to refresh this bonus article when C# 7.3 is released to update it with the new features in that point release. Good luck with all your C# adventures!

Twilio WhatsApp API: A great tool to reach new businesses

Amarabha Banerjee
15 Aug 2018
3 min read
The trend in the last few years has been clear: businesses want to talk to their customers in the same way they communicate with their friends and family. This enables them to cater to customers' specific needs and to create customer-centric products. Twilio, a cloud-based communications platform, has been at the forefront of creating messaging solutions for businesses. Twilio enables developers to integrate SMS and calling facilities into their applications using the Twilio Web Services API. Over the last decade, Twilio customers have used Programmable SMS to build innovative messaging experiences for their users, whether it is sending instant transaction notifications for money transfers, food delivery alerts, or helping millions of people with their parking tickets. The latest feature added to the Twilio API integrates WhatsApp messaging into the application and manages messages and WhatsApp contacts with a business account.

Why is the Twilio WhatsApp integration so significant?

WhatsApp is one of the most popular instant messaging apps in the world at present. Every day, 30 million messages are exchanged using WhatsApp, and its popularity spans many different countries (source: Twilio). Integrating WhatsApp communications into business applications therefore means greater flexibility and the ability to reach a larger segment of the audience.

How is it done?

Integrating directly with the WhatsApp messaging network requires hosting, managing, and scaling containers in your own cloud infrastructure. This can be a tough task for any developer or business with a different end-objective and a limited budget. The Twilio API makes it easier for you. WhatsApp delivers end-to-end message encryption through containers; these containers manage encryption keys and messages between the business and users, and they need to be hosted in multiple regions for high availability and to scale efficiently as messaging volume grows. Twilio solves this problem for you with a simple and reliable REST API.

Other failsafe messaging features can be implemented easily using the Twilio API, such as:

- user opt-out from WhatsApp messages
- automatic switching to SMS messaging in the absence of a data network
- a shift to another messaging service in regions where WhatsApp is absent

Also, you do not have to use separate APIs to get connected with different messaging services like Facebook Messenger, MMS, RCS, and LINE, as all of them are possible within this API.

WhatsApp is taking things at a slower pace currently. It initially allows you to develop a test application using the Twilio Sandbox for WhatsApp. This lets you test your application first and send messages to a limited number of users only. After your app is production ready, you can create a WhatsApp business profile and get a dedicated Twilio number to work with WhatsApp (source: Twilio).

With the added feature, Twilio enables you to leave aside the maintenance aspect of creating a separate WhatsApp integration service. Twilio takes care of the cloud containers and the security aspect of the application. It gives developers an opportunity to focus on creating customer-centric products to communicate with customers easily and efficiently.

Make phone calls and send SMS messages from your website using Twilio
Securing your Twilio App
Building a two-way interactive chatbot with Twilio: A step-by-step guide
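To give a feel for how simple the REST API described above is to use, here is a minimal sketch with the Twilio Python helper library. The credentials and phone numbers are placeholders; during testing, the from_ number would be your Twilio Sandbox for WhatsApp number.

from twilio.rest import Client  # pip install twilio

client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")   # placeholder credentials

message = client.messages.create(
    from_="whatsapp:+14155238886",   # placeholder: your sandbox WhatsApp number
    to="whatsapp:+15551234567",      # placeholder: a recipient who joined the sandbox
    body="Your order has shipped and should arrive tomorrow!",
)
print(message.sid)                   # unique ID of the queued WhatsApp message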

Five reasons why Xamarin will change mobile development

Packt
13 Apr 2017
2 min read
Find out why Xamarin will change mobile development

Developers of enterprise desktop and web applications are all looking for ways to extend their apps to mobile platforms without starting from scratch or compromising user experience. Most developers and organizations are turning to cloud-first and mobile-first solutions, with speed and agility in mind. After all, it's a competitive world out there and you need to give yourself the best chance of success without breaking too much of a sweat trying to deliver. There are a few options, but Xamarin is quickly becoming the cross-platform mobile development framework of choice. If you're not sure what to make of Xamarin, or feel you need a way of separating fact from fiction, hype from reality, look no further than this free white paper, Five Reasons Xamarin Will Change Mobile Development, that we've put together with our friends at Syncfusion. If you want the summary before you dig deeper into Xamarin, here's the quick pitch: deliver native applications with more speed and less cost. To learn more, and to find out how you can start to take advantage of Xamarin, simply head on over to Syncfusion to download the white paper and find out how to bring Xamarin into your current organizational toolkit. Visit Syncfusion to download the white paper now.

iOS 9: Up to Speed

Samrat Shaw
20 May 2016
5 min read
iOS 9 is the biggest iOS release to date. The new OS introduced new intricate features and refined existing ones. The biggest focus is on intelligence and proactivity, allowing iOS devices to learn user habits and act on that information. While it isn’t a groundbreaking change like iOS 7, there is a lot of new functionality for developers to learn. Along with iOS 9 and Xcode 7, Apple also announced major changes to the Swift language (Swift 2.0) and announced open source plans. In this post, I will discuss some of my favorite changes and additions in iOS 9. 1 List of new features Let’s examine the new features. 1.1 Search Extensibility Spotlight search in iOS now includes searching within third-party apps. This allows you to deep link from Search in iOS 9. You can allow users to supply relevant information that they can then navigate directly to. When a user clicks on any of the search results, the app will be opened and you can be redirected to the location where the search keyword is present. The new enhancements to the Search API include NSUserActivity APIs, Core Spotlight APIs, and web markup. 1.2 App Thinning App thinning optimizes the install sizes of apps to use the lowest amount of storage space while retaining critical functionality. Thus, users will only download those parts of the binary that are relevant to them. The app's resources are now split, so that if a user installs an app on iPhone 6, they do not download iPad code or other assets that are used to make an app universal. App thinning has three main aspects, namely app slicing, on-demand resources, and bitcode. Faster downloads and more space for other apps and content provide a better user experience. 1.3 3D Touch iPhone 6s and 6s Plus added a whole new dimension to UI interactions. A user can now press the Home screen icon to immediately access functionality provided by an app. Within the app, a user can now press views to see previews of additional content and gain accelerated access to features. 3D Touch works by detecting the amount of pressure that you are applying to your phone's screen in order to perform different actions. In addition to the UITouch APIs, Apple has also provided two new sets of classes, adding 3D Touch functionality to apps: UIPreviewAction and UIApplicationShortcutItem. This unlocks a whole new paradigm of iOS device interaction and will enable a new generation of innovation in upcoming iOS apps. 1.4 App Transport Security (ATS) With the introduction of App Transport Security, Apple is leading by example to improve the security of its operating system. Apple expects developers to adopt App Transport Security in their applications. With App Transport Security enabled, network requests are automatically made over HTTPS instead of HTTP. App Transport Security requires TLS 1.2 or higher. Developers also have an option to disable ATS, either selectively or as a whole, by specifying in the Info.plist of their applications. 1.5 UIStackView The newly introduced UIStackView is similar to Android’s LinearLayout. Developers embed views to the UIStackView (either horizontally or vertically), without the need to specify the auto layout constraints. The constraints are inserted by the UIKit at runtime, thus making it easier for developers. They have the option to specify the spacing between the subviews. It is important to note that UIStackViews don't scroll; they just act as containers that automatically fit their content. 
1.6 SFSafariViewController With SFSafariViewController, developers can use nearly all of the benefits of viewing web content inside Safari without forcing users to leave an app. It saves developers a lot of time, since they no longer need to create their own custom browsing experiences. For the users too, it is more convenient, since they will have their passwords pre-filled, not have to leave the app, have their browsing history available, and more. The controller also comes with a built-in reader mode. 1.7 Multitasking for iPad Apple has introduced Slide Over, Split View, and Picture-in-Picture for iPad, thus allowing certain models to use the much larger screen space for more tasks. From the developer point of view, this can be supported by using the iOS AutoLayout and Size Classes. If the code base already uses these, then the app will automatically respond to the new multitasking setup. Starting from Xcode 7, each iOS app template will be preconfigured to support Slide Over and Split View. 1.8 The Contacts Framework Apple has introduced a brand new framework, Contacts. This replaces the function-based AddressBook framework. The Contacts framework provides an object-oriented approach to working with the user's contact information. It also provides an Objective-C API that works well with Swift too. This is a big improvement over the previous method of accessing a user’s contacts with the AddressBook framework. As you can see from this post, there are a lot of exciting new features and capabilities in iOS9 that developers can tap into, thus providing new and exciting apps for the millions of Apple users around the world. About the author Samrat Shaw is a graduate student (software engineering) at the National University Of Singapore and an iOS intern at Massive Infinity.

Facebook plans to use Bloomsbury AI to fight fake news

Pravin Dhandre
30 Jul 2018
3 min read
“Our investments in AI mean we can now remove more bad content quickly because we don't have to wait until after it's reported. It frees our reviewers to work on cases where human expertise is needed to understand the context or nuance of a situation. In Q1, for example, almost 90% of graphic violence content that we removed or added a warning label to was identified using AI. This shift from reactive to proactive detection is a big change -- and it will make Facebook safer for everyone.”

– Mark Zuckerberg, on Facebook's earnings call on Wednesday this week

To understand the significance of the above statement, we must first look at the past. Last year, social media giant Facebook faced multiple defamation lawsuits across the UK, Germany, and the US over fake news articles and the spread of misleading information. To make amends, Facebook rolled out fake news identification tools, but these failed to completely tame the effects of bogus news. In fact, the company took a bad hit to its advertising revenue, and its social reputation nosedived.

Early this month, Facebook confirmed the acquisition of Bloomsbury AI, a London-based artificial intelligence start-up with over 60 patents acquired to date. Bloomsbury AI focuses on natural language processing, developing machine reading methods that can understand written text across a broad range of domains. The Artificial Intelligence team at Facebook will be on-boarding the complete Bloomsbury AI team and will build robust methods to kill the plague of fake news throughout the Facebook platform.

The rich expertise brought over by the Bloomsbury AI team will strengthen Facebook's work in natural language processing research and give it a deeper understanding of natural language and its applications. The combination should help Facebook develop advanced machine reading, reasoning, and question answering methods, boosting Facebook's NLP engine so that it can gauge the legitimacy of questions across a broad range of topics and make intelligent choices, thereby tackling the challenges of fake news and automated bots.

No doubt, Facebook is going to leverage Bloomsbury's Cape service to answer the majority of questions posed over unstructured text. The duo will also play a significant role in parsing content, chiefly to tackle fake photos and videos. In addition, it has been said that the new team members will actively contribute to ongoing artificial intelligence projects such as AI hardware chips, AI technology mimicking humans, and many more.

Facebook is investigating data analytics firm Crimson Hexagon over misuse of data
Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
Did Facebook just have another security scare?
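To make the idea of machine reading over unstructured text a little more concrete, here is a deliberately naive, illustrative Python sketch. It has nothing to do with Bloomsbury AI's Cape service or Facebook's actual models; it simply scores each sentence of a passage by how many words it shares with the question and returns the best match, which is the crudest possible baseline for this kind of question answering.

import re

def naive_answer(question, context):
    """Pick the sentence from `context` that shares the most words with `question`.

    A toy stand-in for machine reading: real systems use trained neural models,
    not word overlap, but the input/output shape is the same.
    """
    question_words = set(re.findall(r"[a-z']+", question.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", context.strip())

    def overlap(sentence):
        return len(question_words & set(re.findall(r"[a-z']+", sentence.lower())))

    return max(sentences, key=overlap)

# Made-up passage purely for illustration.
context = (
    "Bloomsbury AI is a London-based start-up focused on natural language processing. "
    "Its Cape service answers questions posed over unstructured text. "
    "Facebook acquired the company in 2018."
)
print(naive_answer("What does the Cape service do?", context))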

6 use cases of Machine Learning in Healthcare

Sugandha Lahoti
10 Nov 2017
7 min read
While hospitals have sophisticated processes and highly skilled administrators, management can still be an administrative nightmare for already time-starved healthcare professionals. A sprinkle of automation can do wonders here. It could free up a practitioner's invaluable time and thereby allow them to focus on critically ill patients and complex medical procedures. At the most basic level, machine learning can mechanize routine tasks such as documentation, billing, and regulatory processes. It can also provide ways and tools to diagnose and treat patients more efficiently. However, these tasks only scratch the surface. Machine learning is here to revolutionize healthcare and allied industries such as pharma and medicine. Below are some of the ways it is being put to use in these domains.

Helping with disease identification and drug discovery

Healthcare systems generate copious amounts of data and use them for disease prediction. However, the software necessary to generate meaningful insights from this unstructured data is often not in place, so drug and disease discovery ends up taking time. Machine learning algorithms can discover signatures of diseases at rapid rates by allowing systems to learn and make predictions based on previously processed data. They can also be used to determine which chemical compounds could work together to aid drug discovery, eliminating the time-consuming process of experimenting with and testing millions of compounds. With faster discovery of diseases, the chances of detecting symptoms earlier and the probability of survival increase, and the available treatment options expand. IBM has collaborated with Teva Pharmaceutical to discover new treatment options for respiratory and central nervous system diseases using machine learning algorithms, such as predictive and visual analytics, that run on the IBM Watson Health Cloud. To gain more insights on how IBM Watson is changing the face of healthcare, check this article.

Enabling precision medicine

Precision medicine revolves around healthcare practices specific to a particular patient. This includes analyzing a person's genetic information, health history, environmental exposure, and needs and preferences to guide diagnosis and subsequent treatment. Here, machine learning algorithms are used to sift through vast databases of patient data to identify factors, such as genetic history and predisposition to diseases, that could strongly determine treatment success or failure. ML techniques in precision medicine exploit molecular and genomic data to assist doctors in directing therapies to patients and to shed light on disease mechanisms and heterogeneity. They also predict which diseases are likely to occur in the future and suggest methods to avoid them. Cellworks, a life sciences technology company, provides a SaaS-based platform for generating precision medicine products. Their platform analyses the genomic profile of the patient and then provides patient-specific reports for improved diagnosis and treatment.

Assisting radiology and radiotherapy

CT and MRI scans for radiological diagnosis and interpretation are burdensome, laborious, and time-consuming. They involve segmentation—differentiating between healthy and infectious tissues—which, when done manually, carries a good probability of errors and misdiagnosis.
Machine learning algorithms can speed up the segmentation process while also increasing accuracy in radiotherapy planning. ML can provide physicians with information for better diagnostics, which helps in obtaining accurate tumor locations, and it can predict radiotherapy response to help create a personalized treatment plan. Apart from these, ML algorithms find use in medical image analysis because they learn from examples. This involves classification techniques that analyze images and the available clinical information to generate the most likely diagnosis. Deep learning can also be used for detecting lung cancer nodules in early screening CT scans and displaying the results in ways that are useful for clinical work. Google's machine learning division, DeepMind, is automating radiotherapy treatment for head and neck cancers using scans from almost 700 diagnosed patients. An ML algorithm scans the reports of symptomatic patients against these previous scans to help physicians develop a suitable treatment process. Arterys, a cloud-based platform, automates cardiac analysis using deep learning.

Providing neurocritical care

A large number of neurological diseases develop gradually or in stages, so the decay of the brain happens over time. Traditional approaches to neurological care, such as peak activation, EEG epileptic spikes, and pronator drift, are not accurate enough to diagnose and classify neurological and psychiatric disorders. This is because they are typically used to assess end results rather than to analyze progressively how the brain disease develops. Moreover, timely, personalized neurological treatment and diagnosis rely heavily on the constant availability of an expert. Machine learning algorithms can advance the science of detection and prediction by learning how the brain progressively develops into these conditions. Deep learning techniques are applied in neuroimaging to detect abstract and complex patterns from single-subject data in order to diagnose brain disorders. Machine learning techniques such as SVM, RBFN, and RF are combined with PDT (pronator drift tests) to detect stroke symptoms based on quantification of proximal arm weakness using inertial sensors and signal processing. Machine learning algorithms can also be used for detecting signs of dementia before its onset. The Douglas Mental Health University Institute uses PET scans to train ML algorithms to spot signs of dementia by analyzing them against scans of patients who have mild cognitive impairment. They then run scans belonging to symptomatic patients through the trained algorithm to predict the possibility of dementia.

Predicting epidemic outbreaks

Epidemic predictions traditionally rely on manual accounting. This includes self-reports or aggregated information from healthcare services, such as reports from health protection bodies and surveys like the CDC, NHIS, and the National Immunization Survey. These are time-consuming and error-prone, which makes predicting and prioritizing outbreaks challenging. ML algorithms can automatically perform analysis, improve calculations, and verify information with minimal human intervention. Machine learning techniques like support vector machines and artificial neural networks can predict the epidemic potential of a disease and provide alerts for disease outbreaks. They do this using data collected from satellites, real-time social media updates, historical information on the web, and other sources.
They also use geospatial data, such as temperature, weather conditions, and wind speed, to predict the magnitude of the impact an epidemic could have in a particular area and to recommend measures for preventing and containing it early on. AIME, a medical startup, has come up with an algorithm to predict the outbreak, and even the epicenter, of epidemics such as dengue fever before they occur.

Better hospital management

Machine learning can bring about a change in traditional hospital management systems by reimagining the hospital as a digital, patient-centric care center. This includes automating routine tasks such as billing, admission and clearance, and monitoring patients' vitals. With administrative tasks out of the way, hospital authorities can fully focus on the care and treatment of patients. ML techniques such as computer vision can be used to feed all the vital signs of a patient directly into the EHR from the monitoring devices. Smart tracking devices are also used on patients to provide their real-time whereabouts. Predictive analysis techniques provide a continuous stream of real-time images and data; this analysis can sense risk and prioritize activities for the benefit of all patients. ML can also automate non-clinical functions, including pharmacy, laundry, and food delivery. The Johns Hopkins Hospital has its own command center that uses predictive analytics for efficient operational flow.

Conclusion

The digital health era focuses on health and wellness rather than diseases. The incorporation of machine learning in healthcare provides an improved patient experience, better public health management, and reduced costs through the automation of manual labour. The next step in this amalgamation is a successful collaboration of clinicians and doctors with machines. This would bring about a futuristic health revolution with improved, precise, and more efficient care and treatment.
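Many of the techniques mentioned above—SVMs, random forests, neural networks—boil down to training a classifier on patient features. The sketch below is a purely illustrative, hedged example using synthetic data and scikit-learn's SVC; the feature names, thresholds, and labels are made up and have no clinical meaning, but it shows the shape of the train-and-evaluate workflow these use cases rely on.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic, made-up patient features: age, resting blood pressure, fasting glucose.
# This is NOT clinical data; it only illustrates the workflow.
rng = np.random.RandomState(0)
n_patients = 500
age = rng.uniform(20, 90, n_patients)
blood_pressure = rng.normal(120, 15, n_patients)
glucose = rng.normal(100, 20, n_patients)
X = np.column_stack([age, blood_pressure, glucose])

# A noisy, invented labelling rule standing in for "developed the condition".
risk_score = 0.03 * age + 0.02 * blood_pressure + 0.02 * glucose
y = (risk_score + rng.normal(0, 0.5, n_patients) > 6.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A support vector machine, one of the techniques named in the article.
model = SVC(kernel="rbf", gamma="scale")
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Accuracy on held-out synthetic patients:", accuracy_score(y_test, predictions))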

Node 6: What to Expect

Darwin Corn
16 Jun 2016
5 min read
I have a confession to make—I'm an Archer. I've taken my fair share of grief for this, with everyone from the network admin who got by on Kubuntu to the C dev running Gentoo getting in on the fun. As I told the latter, Gentoo is for lazy people; lazy people who want to have their cake and eat it too with respect to the bleeding edge.

Given all that, you'll understand that when I was asked to write this post, my inner Archer balked at the thought of digging through source and documentation to distill this information into something palatable for your average back end developer. I crossed my fingers and wandered over to their GitHub, hoping for a clear roadmap, or at least a bunch of issues milestoned to 6.0.0. I was disappointed, even though a lively discussion on dropping XP/Vista support briefly captured my attention.

I've also been primarily doing front end work for the last few months. While I pride myself on being an IT guy who does web stuff every now and then (someone who cares more about the CI infrastructure than the product that ships), circumstances have dictated that I be in the 'mockup' phase for a couple of volunteer gigs as well as my own projects, all at once. Convenient? Sure. Not exactly conducive to my generalist, Renaissance Man self-image, though (as an aside, the annotations in Rap Genius read like a Joseph Ducreux meme generator). Back end framework and functionality has been relegated to the back of my mind. As such, I watched Node bring io.js back into the fold and go to Semver this fall with all the interest of a house cat—a cat that cares more about the mouse in front of him than the one behind the wall.

By the time Node 6 drops in April, I'm going to be feasting on back end work, so it would behoove me to understand the new tools I'll be using. So I ignored the Archer on my left shoulder and, listening to my editor on the right, dug around the source for a bit. Of note is that there's nothing near as groundbreaking as the 4.0.0 release, where Node adopted the ES6 support introduced in io.js 1.0.0. That's not so much a slight at Node development as a reflection of the timeline of JavaScript development. That said, here are some highlights from my foray into their issue tracker:

Features

Core support for Promises

I've been playing around with Ember for so long that I was frankly surprised that Node didn't include core support for Promises. A quick look at the history of Node shows that they originally existed (based on EventEmitters), but that support was removed in 0.2, and ever since, server-side promises have been the domain of various npm modules. The PR is still open, so it remains to be seen if the initial implementation of this makes it into 6.0.0.

Changing FIPS crypto support

In case you're not familiar with the Federal Information Processing Standard, MDN has a great article explaining it. Node's implementation left some functionality to be desired, namely that Node compiled with FIPS support couldn't use non-FIPS hashing algorithms (md5 for one, breaking npm). Node 6 compiled with FIPS support will likely both fix that and flip its functionality, such that, by default, FIPS support will be disabled (in Node compiled with it), requiring invocation in order to be used.

Replacing the C http-parser and DNS resolvers with JS implementations

These PRs are still open (and have been for almost a year), so I'd say seeing them come in 6.0.0 is unlikely, though the latest discussion on the latter seems centered around CI, so I might be wrong about it not making 6.0.0.
That being said, while not fundamentally changing functionality too much, it will be cool to see more of the codebase written in JavaScript.

Deprecations

fs reevaluation

This primarily affects pre-v4 graceful-fs, on which many modules, including major ones such as npm itself and less, are dependent. The problem is not so much that these modules depend on graceful-fs, but that they don't require v4 or newer. This could lead to breakage for a lot of Node apps whose developers aren't on their game when it comes to keeping dependencies up to date. As a set-and-forget kind of guy, I'm thankful this is just getting deprecated and not outright removed.

Removals

sys

The sys module was deprecated in Node 4 and has always been deprecated in io.js. There was some discussion on its removal last summer, and it was milestoned to 6.0.0. It's still only deprecated as of Node 5.7.0, so we'll likely have to wait until release to see if it's actually removed.

Nothing groundbreaking, but there is some maturation of the platform in both the codebase and the deprecations, as well as the outright removals. If you live on the bleeding edge like I do, you'll hop on over to Node 6 as soon as it drops in April, but there's little incentive to make the leap for those taking a more conservative approach to Node development.

About the author

Darwin Corn is a Systems Analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the Information Technology world.

What security and systems specialists are planning to learn in 2018

Savia Lobo
22 Jun 2018
3 min read
Developers are always on the lookout for something new to learn that can add to their skills and experience. Organizations such as Red Hat, Microsoft, Oracle, and many more roll out courses and certifications for developers and other individuals, and 2018 has brought some exciting areas for security and systems experts to explore. Our annual Skill Up survey highlighted a few of the technologies that security and systems specialists are planning to learn this year.

Docker emerged at the top, with professionals wanting to learn more about it and its use in building software with an 'everything in one place' approach. The survey also highlighted specialists being interested in learning Red Hat's OpenStack, Microsoft Azure, and AWS technologies.

OpenStack, a cloud operating system, keeps a check on large pools of compute, storage, and networking resources within a datacenter, all through a web interface. It provides users with a modular architecture to build their own cloud platforms without the restrictions of traditional cloud infrastructure. There is also a Red Hat® Certified System Administrator course with which one can learn to secure private clouds on OpenStack. You can check out our book on OpenStack Essentials to get started.

The survey also highlights that systems specialists are interested in learning Microsoft Azure. The primary reason for their choice is that it offers a varied range of options to protect one's applications and data. It offers a seamless experience for developers who want to build, deploy, and maintain applications on the cloud. It also supports compliance efforts and provides cost-effective security for individuals and organizations. AWS likewise offers out-of-the-box features with its products, such as Amazon EC2, Amazon S3, AWS Lambda, and many more. Read about why AWS is a preferred cloud provider in our article, Why AWS is the preferred cloud platform for developers working with big data?

In response to another question in the same survey, developers expressed their interest in learning security. With a lot of information being hosted on the web, organizations fear that their valuable data might be attacked by hackers and used illegally.

Read also:
The 10 most common types of DoS attacks you need to know
Top 5 penetration testing tools for ethical hackers

Developers are also keen on learning about security automation, which can aid them in performing vulnerability scans without human error and also decreases their time to resolution. Security automation further optimizes the ROI of their security investments. Learn security automation using Ansible, one of the popular tools, with our book, Security Automation with Ansible 2.

So here are some of the technologies that security and systems specialists are planning to learn. This analysis was taken from the Packt Skill Up Survey 2018. Do let us know your thoughts in the comments below. The entire survey report can be found on the Packt store.

IoT Forensics: Security in an always connected world where things talk
Top 5 cybersecurity assessment tools for networking professionals
Pentest tool in focus: Metasploit

The pets and cattle analogy demonstrates how serverless fits into the software infrastructure landscape

Russ McKendrick
20 Feb 2018
8 min read
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers. This can be quite a valid conclusion if you are using a public cloud service like AWS, but when it comes to running in your own environment, you can't avoid having to run on a server of some sort. This blog post is an extract from Kubernetes for Serverless Applications by Russ McKendrick.

Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot, as it is quite an easy way to explain the differences between modern cloud infrastructures and a more traditional approach.

The pets, cattle, chickens, insects, and snowflakes analogy

I first came across the pets versus cattle analogy back in 2012 from a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave at the cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker, who at the time was an engineer at Microsoft. The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.

Pets: the bare metal servers and virtual machines

Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:

- You name each server as you would a pet. For example, app-server01.domain.com and database-server01.domain.com.
- When your pets are ill, you take them to the vet. This is much like you, as a system administrator, rebooting a server, checking logs, and replacing the faulty components of a server to ensure that it is running healthily.
- You pay close attention to your pets for years, much like a server. You monitor for issues, patch them, back them up, and ensure they are fully documented.

There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them—this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred servers.

Cattle: the sort of instances you run on public clouds

Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled.

- You have so many cattle in your herd you don't name them; instead they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name, so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
- When a member of your herd gets sick, you shoot it, and if your herd requires it, you replace it. In much the same way, if an instance in your cluster starts to have issues, it is automatically terminated and replaced with a replica.
- You do not expect the cattle in your herd to live as long as a pet typically would; likewise, you do not expect your instances to have an uptime measured in years.
- Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster.
- If your cluster requires additional resources, you launch more instances and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.

Chickens: an analogy for containers

In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:

- Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster as you can launch multiple containers per instance.
- Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances, they take seconds to launch, and can be configured to consume less CPU and RAM.
- Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.

Insects: an analogy for serverless

Keeping in line with the animal theme, Eric Johnson wrote a blog post for RackSpace which introduced insects. This term was introduced to describe serverless and Functions as a Service. Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service as these have a lifespan of seconds.

Snowflakes

Around the time Randy Bias gave his talk which mentioned pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare:

- Every snowflake is unique and impossible to reproduce. Just like that one server in the office that was built and not documented by that one guy who left several years ago.
- Snowflakes are delicate. Again, just like that one server—you dread it when you have to log in to it to diagnose a problem and you would never dream of rebooting it as it may never come back up.

Bringing the pets, cattle, chickens, insects and snowflakes analogy together...

When I explain the analogy to people, I usually sum up by saying something like this:

- Organizations who have pets are slowly moving their infrastructure to be more like cattle.
- Those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources.
- Those running chickens are going to be looking at how much work is involved in moving their application to run as insects by completely decoupling their application into individually executable components.

But the most important take away is this: No one wants to or should be running snowflakes.

Serverless and insects

As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model. When executing this model you, as the end user, do not need to worry about which server your code is executed on as all of the decisions on placement, server management, and capacity are abstracted away from you—it does not mean that you literally do not need any servers.
Now there are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services, and where the cloud provider manages the compute resources needed to execute your code. Typically these services, which we will look at in the next section, are billed for the resources used to execute your code in per-second increments.

So how does that explanation fit in with the insect analogy? Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded, they are cropped, creating several different sizes which will be used to display as thumbnails and mobile-optimized versions on the site.

In the pets and cattle world, this would be handled by a server which is powered on 24/7, waiting for users to upload images. Now this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, then this will cause load issues on the server where this function is being executed.

We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well; they will be watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.

Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it.
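To make the insects approach a little more concrete, here is a minimal, hedged Python sketch of what such a function could look like as an AWS-Lambda-style handler triggered by an S3 upload. The bucket names, thumbnail sizes, and key layout are invented for illustration and error handling is omitted; this is not code from the book, just a sketch of the pattern: the function wakes up for one event, writes its thumbnails, and terminates.

import io
import boto3
from PIL import Image  # Pillow

s3 = boto3.client("s3")

# Illustrative values only; a real deployment would configure these.
THUMBNAIL_SIZES = [(150, 150), (600, 600)]
OUTPUT_BUCKET = "example-processed-images"

def handler(event, context):
    """Triggered by an S3 upload event; creates thumbnails and exits."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Fetch the freshly uploaded original image.
    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    for width, height in THUMBNAIL_SIZES:
        img = Image.open(io.BytesIO(original))
        img.thumbnail((width, height))  # resize in place, keeping aspect ratio
        out = io.BytesIO()
        img.convert("RGB").save(out, format="JPEG")
        s3.put_object(
            Bucket=OUTPUT_BUCKET,
            Key=f"{width}x{height}/{key}",
            Body=out.getvalue(),
            ContentType="image/jpeg",
        )

    # Nothing else to do: the compute used for this invocation is released.
    return {"processed": key, "sizes": len(THUMBNAIL_SIZES)}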