
Tech Guides


The New AI Cold War Between China and the USA

Neil Aitken
28 Jun 2018
6 min read
The Cold War between the United States and Russia ended in 1991. However, considering the 'behind the scenes' behavior of the world's two current superpowers, China and the USA, another might just be beginning. This time around, many believe that the real battle doesn't relate to the trade deficit between the two countries, despite news stories detailing the escalation of trade tariffs. In the next decade and a half, the real battle will take place between China and the USA in the technology arena, specifically in the area of Artificial Intelligence, or AI.

China's not shy about its AI ambitions

China has made clear its goals when it comes to AI. It has publicly announced its plan to be the world leader in Artificial Intelligence by 2030. The country has learned a hard lesson, missing out on previous tech booms, notably in the race for internet supremacy early this century. Now, it is taking a far more proactive stance. The AI market is estimated to be worth $150 billion per year by 2030, slightly over a decade from now, and China has made very clear public statements that it wants it all. The US, in contrast, has a number of private companies striving to carve out a leadership position in AI but no holistic policy. Quite the contrary, in fact. Trump's government says, "There is no need for an AI moonshot, and minimizing government interference is the best way to make sure the technology flourishes."

What makes China so dangerous as an AI threat?

China's background and current circumstances give it a set of valuable strategic advantages when it comes to AI. AI solutions are based, primarily, on two things. First, and of critical importance, is the amount of data available to 'train' an AI algorithm and the relative ease or difficulty of obtaining access to it. Second is the algorithm which sorts the data, looking for patterns and insights derived from research, which are used to optimize the AI tools that interpret it. China leads the world on both fronts.

China has more data: China's population is four times larger than the US's, giving it a massive data advantage. China has a total of 730 million daily internet users and 704 million smartphone mobile internet users. Each of those connected individuals uses their phone, laptop or tablet online each day, and those digital interactions leave logs of location, time, action performed and many other variables. In sum, China's huge population is constantly generating valuable data which can be mined for value.

Chinese regulations give public and private agencies easier access to this data: Few countries have exemplary records when it comes to human rights. Both Australia and the US, for example, have been rebuked by the UN for their treatment of immigration in recent years. Questions have been asked of China too. Some suggest that China's centralized government, and its allegedly somewhat shady history when it comes to human rights, means it can provide internet companies with more data, more easily, than their private equivalents in the US could dream of. Chinese cybersecurity laws require companies doing business in the country to store their data locally. The government has placed one state representative on the board of each of the major tech companies, giving it direct, unfettered central government influence over the strategic direction and intent of those companies, especially when it comes to coordinating the distribution of the data they obtain. In the US, data leakage is one of the most prominent news stories of 2018. Given Facebook's presentation to Congress around the Facebook/Cambridge Analytica data sharing scandal, it would be hard to claim that US companies have access to data beyond what each one collects on its own as they compete to evolve AI solutions fastest.

It's more secretive: China protects its advantage by limiting other countries' access to its findings and information related to AI. At the same time, China takes advantage of the open publication of cutting-edge ideas generated by scientists in other parts of the world.

How China is doubling down on its natural advantage in AI solution development

A number of metrics show China's growing advantage in the area. China is investing more money in the field and leading the world in the number of university-led research papers on AI being published. China is investing more money in AI than the USA: it overtook the US in AI funds allocation in 2015 and has been increasing investment in the area since. (Chart source: Wall Street Journal.) China now performs more research into AI than the US, as measured by the number of published, peer-reviewed scientific journal papers. (Chart source: HBR.)

Why 'network effects' will decide the ultimate winner in the AI arms race

You won't see evidence of a Cold War in the behavior of world leaders. The handshakes are firm and the visits are cordial. Everybody smiles when they meet at the G8. However, a look behind the curtain clearly shows a 21st-century arms race underway, led by AI-related investments in both countries. Network effects ensure that there is often only one winner in a fight for technological supremacy. Whoever has the 'best product' for a given application wins the most users. The data obtained from those users' interactions with the tool is used to hone its performance, creating a virtuous circle. The result is evident in almost every sphere of tech: network effects explain why most people use only Google, why there's only one Facebook and how Netflix has overtaken cable TV in the US as the primary source of video entertainment. Ultimately, there is likely to be only one winner in the war surrounding AI, too.

From a military perspective, the advantage China has in its starting point for AI solution development could be the deciding factor. As we've seen, China has more people, with more devices, generating more data. That is likely to help the country develop workable AI solutions faster. It ingests the hard-won advantages that US data scientists develop and share, but does not share its own. Finally, it simply outspends and out-researches the US, investing more in AI than any other country. China's coordinated approach outpaces the US's market-based solution with every step. The country with the best AI solutions for each application will gain a 'winner takes all' advantage and the winning hand in the $300 billion game of AI market ownership.

We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so overhyped?
Alarming ways governments are using surveillance tech to watch you


Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Savia Lobo
05 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, covers Shoshana Zuboff's take on how to tackle the complexities of surveillance capitalism, along with the 21st-century solutions she proposes for doing so.

Shoshana Zuboff, author of 'The Age of Surveillance Capitalism', talks about the economic imperatives within surveillance capitalism. Zuboff says that it begins with the unilateral claiming of private human experience and its translation into behavioral data, from which predictions of behavior are produced. These predictions are sold in a new kind of marketplace that trades exclusively in human futures. When we deconstruct the competitive dynamics of these markets we get to understand what the new imperatives are: scale, because a lot of data is needed to make good predictions (economies of scale); and scope, because a variety of data is needed to make good predictions. She shared a brief quote from a data scientist: "We can engineer the context around a particular behavior and force change. That way we are learning how to write the music and then we let the music make them dance." This behavioral modification is systemically institutionalized on a global scale and mediated by a now ubiquitous digital infrastructure.

She further explains that the kind of law and regulation needed today will be 21st-century solutions aimed at the unique 21st-century complexities of surveillance capitalism. She briefly mentioned three arenas in which legislative and regulatory strategies can effectively align with the structure and consequences of surveillance capitalism:

1. We need lawmakers to devise strategies that interrupt and in many cases outlaw surveillance capitalism's foundational mechanisms. This includes the unilateral taking of private human experience as a free source of raw material and its translation into data. It includes the extreme information asymmetries necessary for predicting human behavior. It includes the manufacture of computational prediction products based on the unilateral and secret capture of human experience. It includes the operation of prediction markets that trade in human futures. From the point of view of supply and demand, surveillance capitalism can be understood as a market failure: every piece of research over the last decades has shown that when users are informed of the backstage operations of surveillance capitalism they want no part of it; they want protection, they reject it, they want alternatives.

2. We need laws and regulatory frameworks designed to advantage companies that want to break with the surveillance capitalist paradigm. Forging an alternative trajectory to the digital future will require alliances of new competitors who can summon and institutionalize an alternative ecosystem. True competitors that align themselves with the actual needs of people and the norms of market democracy are likely to attract just about every person on earth as their customers.

3. Lawmakers will need to support new forms of citizen action and collective action, just as nearly a century ago workers won legal protection for their rights to organize, to bargain, and to strike. New forms of citizen solidarity are already emerging in municipalities that seek an alternative to the Google-owned Smart City future, in communities that want to resist the social costs of so-called disruption imposed for the sake of others' gain, and among workers who seek fair wages and reasonable security in the precarious conditions of the so-called gig economy.

She says, "Citizens need your help but you need citizens because ultimately they will be the wind behind your wings, they will be the sea change in public opinion and public awareness that supports your political initiatives." "If together we aim to shift the trajectory of the digital future back toward its emancipatory promise, we resurrect the possibility that the future can be a place that all of us might call home," she concludes.

To know more you can listen to the full hearing video titled, "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech
Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience


FOSDEM 2019: Designing better cryptographic mechanisms to avoid pitfalls - Talk by Maximilian Blochberger

Prasad Ramesh
13 Feb 2019
3 min read
At FOSDEM 2019 in Belgium, Maximilian Blochberger talked about preventing cryptographic pitfalls by avoiding mistakes and integrating cryptographic mechanisms correctly. Blochberger is a research associate at the University of Hamburg. FOSDEM is a free and open event for software developers with thousands of attendees; the 2019 edition took place on 2 and 3 February. The goal of the talk is to raise awareness of cryptographic misuse. Preventing pitfalls in cryptography is not about cryptographic protocols but about designing better APIs.

Consider a scenario where a developer who values privacy intends to add encryption, that is, to integrate cryptographic mechanisms into an application. Blochberger uses a mobile application as an example, but the principles are not specific to mobile applications. A seemingly simple task is presented, encrypting a string, which turns out to be difficult. A software developer without any cryptographic or even security background would search for it online and then copy-paste a common answer snippet from Stack Overflow, one that, even though it carried warnings that it was not secure, had upvotes and probably worked for some people. Readily available code like that contains words like "AES" or "DES", and the software developer may not know much about those encryption algorithms. Using the default algorithms listed in such template code, and reusing the same keys, is not secure. Moreover, the encryption itself is not CPA (chosen-plaintext attack) secure, the key derivation can be unauthenticated, and so on. According to many papers, 98% of security-related snippets are insecure. It is hard to get encryption right, and the vulnerability is especially high if the code is copied from the internet.

Implementing cryptographic mechanisms should be done by cryptographic engineers who have expertise in the field. The software developer does not need to develop or even know about the details of the implementation. Doing compiler checks instead of runtime checks is better, since you don't have to wait for something to go wrong before identifying the problem. Cryptography is harder than it looks; many things can and do go wrong, exposing encrypted data due to incorrect choices or inadequate measures. He demonstrates an iOS and macOS example using Tafelsalz. For more details and the code demonstration, you can watch the video.

Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography
Sennheiser opens up about its major blunder that let hackers easily carry out man-in-the-middle attacks
Tink 1.2.0: Google's new multi-language, cross platform, cryptographic library to secure data
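As a concrete illustration of the kind of misuse the talk describes, here is a minimal sketch in Python (our illustration, not code from the talk, which demonstrates Tafelsalz on iOS/macOS). It assumes the widely used third-party cryptography package and contrasts the copy-pasted anti-pattern with a high-level, misuse-resistant API:

```python
# Minimal sketch, assuming the third-party "cryptography" package (pip install cryptography).
# The talk itself uses Tafelsalz; this only illustrates the general point about API design.
from cryptography.fernet import Fernet

# Anti-pattern the talk warns about (shown as pseudocode, do not use):
#   cipher = AES.new(b"hard-coded key!!", AES.MODE_ECB)   # static key, unauthenticated ECB mode
#   ciphertext = cipher.encrypt(pad(plaintext))

# Misuse-resistant alternative: Fernet bundles random key generation, a fresh IV per message,
# and authentication, so the developer cannot accidentally pick an insecure mode or reuse an IV.
key = Fernet.generate_key()                      # random key material
token = Fernet(key).encrypt(b"encrypting a string is harder than it looks")
assert Fernet(key).decrypt(token) == b"encrypting a string is harder than it looks"
```

The design choice mirrors the talk's argument: a safer API removes the decisions (mode, IV, authentication, key handling) that developers without a cryptographic background most often get wrong.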


Virtual machines vs Containers

Amit Kothari
17 Oct 2017
5 min read
Virtual machines and containers are pretty similar, but they do possess some important differences, and those differences will dictate which one you decide to use. So when you ask a question like 'virtual machines vs containers' there isn't necessarily going to be an outright winner, but there might be a winner for you in a given scenario. Let's take a look at what a virtual machine is, exactly, what a container is, and how they compare, as well as the key differences between the two.

What is a virtual machine?

Virtual machines are a product of hardware virtualization. They sit on top of physical machines with the hypervisor, or virtual machine manager, in between, acting as a layer of abstraction between the virtual machine and the underlying hardware. A virtualized physical machine can host multiple virtual machines, enabling better hardware utilization. Since the hypervisor abstracts the physical machine's hardware, it allows virtual machines to use a different operating system on the same host machine. The host operating system and each virtual machine's operating system run their own kernel. All the communication between the virtual machines and the host machine occurs through the hypervisor, resulting in a high level of isolation. This means that if one virtual machine crashes, it does not affect other virtual machines running on the same physical machine. Although the hypervisor's abstraction layer offers a high level of isolation, it also affects performance. This problem can be solved by using a different virtualization technique.

What is a container?

Containers use lightweight, operating-system-level virtualization. Similar to virtual machines, multiple containers can run on the same host machine. However, containers do not have their own kernel. They share the host machine's kernel, making them much smaller in size compared to virtual machines. They use process-level isolation, allowing processes inside a container to be isolated from other containers.

The difference between virtual machines and containers

In his post Containers are not VMs, Mike Coleman uses the analogy of houses and apartment buildings to compare virtual machines and containers. Self-contained houses have their own infrastructure, while apartments are built around shared infrastructure. Similarly, virtual machines have their own operating system, with kernel, binaries, libraries and so on, while containers share the host operating system kernel with other containers. Because of this, containers are much smaller in size, allowing a physical machine to host more containers than virtual machines. Since containers use lightweight operating-system-level virtualization instead of a hypervisor, they are less resource intensive compared to virtual machines and offer better performance.

Compared to virtual machines, containers are faster, quicker to provision, and easier to scale. Because spinning up a new container is quick and easy, when a patch or an update is required it is easier to start a new container and stop the old one than to update a running container. This allows us to build immutable infrastructure, which is reliable, portable and easy to scale. All of this makes containers a preferred choice for application deployment, especially for teams using microservices or a similar architecture, where an application is composed of multiple small services instead of a monolith. In a microservice architecture, an application is built as a suite of independent, self-contained services. This allows teams to work independently of each other and deliver features quicker. However, decomposing applications into multiple parts adds operational complexity and overhead. Containers solve this problem.

Containers can serve as a building block in the microservice world, where each service can be packaged and deployed as a container. A container will have everything that is required to run a service: the service code, its dependencies, configuration files, libraries and so on. Packaging a service and all its dependencies as a container makes it easy to distribute and deploy the service. Since the container includes everything that is required to run a service, it can be deployed reliably in different environments. A service packaged as a container will run the same way locally on a developer's machine, in a test environment, and in production.

However, there are things to consider when using containers. Containers share the kernel and other components of the host operating system. This makes them less isolated compared to virtual machines, and thus less secure. Since each virtual machine has its own kernel, we can run virtual machines with different operating systems on the same physical machine. However, since containers share the host operating system kernel, only a guest operating system that can work with the host operating system can be installed in a container.

Virtual machines vs containers - in conclusion...

Compared to virtual machines, containers are lightweight, performant and easy to provision. While containers seem to be the obvious choice to build and deploy applications, virtual machines have their own advantages. Compared to physical machines, virtual machines have better tooling and are easier to automate. Virtual machines and containers can co-exist. Organizations with existing infrastructure built around virtual machines can take advantage of containers by deploying them on virtual machines.
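To illustrate how quickly a container can be provisioned compared to booting a full virtual machine, here is a minimal sketch using the Docker SDK for Python (our illustration; it assumes Docker is installed and running and the docker Python package is available):

```python
# Minimal sketch, assuming a running Docker daemon and the "docker" Python package
# (pip install docker). It starts a throwaway Alpine container, runs one command,
# and removes it, the provision-use-discard pattern behind immutable infrastructure.
import docker

client = docker.from_env()                      # connect to the local Docker daemon

# Spinning up a container takes seconds because it reuses the host kernel;
# no guest operating system has to boot.
output = client.containers.run(
    "alpine:latest",                            # tiny base image
    "echo hello from a container",
    remove=True,                                # discard the container after it exits
)
print(output.decode().strip())                  # -> hello from a container
```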


Progressive Web AMPs: Combining Progressive Web Apps and AMP

Sugandha Lahoti
14 Jun 2018
8 min read
Modern day web development is getting harder. Users are looking for fast, responsive and reliable browsing; they want faster results and richer experiences. In addition, modern apps need to be designed to support a large number of ecosystems: mobile web, desktop web, native iOS, native Android, Instant Articles and so on. Every new technology that launches has its own USP. The need today is to combine the features of the various popular mobile technologies in the market and reap their benefits as a combination.

Acknowledging the standalones

In a study by Google it was found that "53% of mobile site visits are abandoned if pages take longer than 3 seconds to load." This calls for making page loads faster and effortless. A cure for this illness comes in the form of AMP, or Accelerated Mobile Pages, the brainchild of Google and Twitter. They are blazingly fast web pages meant purely for readability and speed. Essentially they are HTML, most of CSS, but no JavaScript, so heavy-duty things such as images are not loaded until they are scrolled into view. In AMPs, links are pre-rendered before you click on them. This is made possible by the AMP caching infrastructure, which automatically caches and serves the content to be displayed atop the AMP, and that is why it feels instant. Because developers almost never write JavaScript, this leads to a cheap yet fairly interactive deployment model. However, AMPs are useful only for a narrow range of content and have limited functionality.

Users, on the other hand, are also looking for reliability and engagement. This called for the development of what are known as Progressive Web Apps. Proposed by Google in 2015, PWAs combine the best of mobile and web applications to offer users an enriching experience. Think of a Progressive Web App as a website that acts and feels like a complete app. Once the user starts exploring the app within the browser, it progressively becomes smarter and faster and makes the user experience richer. Application shell architecture and service workers are the two core drivers that enable a PWA to offer speed and functionality. Key benefits that PWAs offer over traditional mobile sites include push notifications, a highly responsive UI, all types of hardware access (including access to camera and microphone), and low data usage, to name a few.

The concoction: PWA + AMP

AMPs are fast and easy to deploy. PWAs are engaging and reliable. AMPs are effortless, more retentive and instant. PWAs support dynamic content and provide push notifications and web manifests. AMPs work on user acquisition; PWAs enhance user experiences. They seemingly work perfectly well on different levels. But users want to start quick and stay quick: they want the first hop to the content they view to be blazingly fast, but then want richer pages with strong reliability and engagement. This called for combining the features of both into one, and this is how Progressive Web AMPs were born. PWAMP, as developers call it, combines the capabilities of the native app ecosystem with the reach of the mobile web. Let us look at how exactly it functions.

The best of both worlds: reaping the benefits of both

AMPs fall back when you have dynamic content: the lack of JavaScript means dynamic functionality such as payments or push notifications is unavailable. A PWA, on the other hand, can never be as fast as an AMP on the first click. Progressive Web AMPs combine the best features of both by making the first click super fast and then rendering subsequent PWA pages and content. So AMP opens a webpage in the blink of an eye with zero time lag, and then the subsequent swift transition to PWA leads to beautiful results with dynamic functionality. It starts fast and builds up as users browse further. This merger is made possible in three different ways.

AMP as PWA: AMP pages in combination with PWA features. This involves enabling PWA features in AMP pages. The user clicks on the link, it boots up fast, and you see an AMP page which loads from the AMP cache. On clicking subsequent links, the user moves away from the AMP cache to the site's domain (origin). The website continues using the AMP library, but because it is now served from the origin, service workers become active, making it possible to prompt users (via web manifests) to install a PWA version of the website for a progressive experience.

AMP to PWA: AMP pages utilized for a smooth transition to PWA features. In PWAs the service workers and app shell kick in only after the second step. Hence AMPs can be a perfect entry point for your apps: while the user discovers content quickly on AMP pages, the service worker of the PWA installs in the background, and the user is instantly upgraded to the PWA on subsequent clicks, which can add push notifications, reminders, web manifests and so on. So the next click is also going to be instant.

AMP in PWA: AMP as a data source for PWA. AMPs are easy and safe to embed. As they are self-contained units, they are easily embeddable in websites and can therefore be utilized as a data source for PWAs. AMPs make use of Shadow AMP, which can be introduced in your PWA. This AMP library loads in the top-level page, can amplify the portions of the content the developer chooses, and connects to a whole load of documents to render them. As the AMP library is compiled and loaded only once for the entire PWA, it in turn reduces backend implementations and client complexity.

How they are used in real-world scenarios

Shopping: PWAMP offers a high-engagement experience to shoppers. Because AMP pages tend to rank highly in Google search, AMP attracts customers to your site through faster discovery of the apps, and the PWA keeps them there by allowing a rich, immersive, app-like shopping experience that keeps shoppers engaged. Lancôme, the L'Oréal Paris cosmetics brand, is soon combining AMP with its existing PWA. Its PWA had led to a 17% year-over-year increase in mobile sales. With the addition of AMP, it aims to build lightweight mobile pages that load as fast as possible on smartphones to make the site faster and more engaging.

Travel: PWAMP lets users browse through a list of hotels that loads instantly on the first click. The customer may then book a hotel of their choice in a subsequent click, which upgrades them to the PWA experience. Wego is a Singapore-based travel service. Its PWAMP has achieved a load time of 1.6 seconds for new users and 1 second for returning customers, which has helped increase site visits by 26%, reduce bounce rates by 20% and increase conversions by 95% since its launch.

News and media: Progressive Web AMPs are also highly useful in news apps. As the user engages with content using AMP, the PWA downloads in the background, creating frictionless, uninterrupted reading. The Washington Post has come up with one such app where users experience the Progressive Web App when reading an AMP article and clicking through to the PWA link when it appears in the menu. In addition, their PWA icon can be added to a user's home screen through the phone's browser.

All the above examples showcase how the combination proves to always be fast, no matter what. Progressive Web AMPs are progressively enhanced with just one backend, the AMP, to rule them all, meaning that deploy targets are reduced considerably. All ecosystems, namely web, Android, and iOS, are supported with just thin layers of extra code, making PWAMPs highly beneficial in cases of constrained engineering resources or where reduced infrastructure complexity is needed. In addition, Progressive Web AMPs are highly useful when a site has a lot of static content on individual pages, such as travel, media or news sites. All these statements assert that PWAMP has the power to provide a full mobile web experience through an artful and strategic combination of the AMP and PWA technologies. To know more about how to build your own Progressive Web AMPs, you can visit the official developer's website.

Top frameworks for building your Progressive Web Apps (PWA)
5 reasons why your next app should be a PWA (progressive web app)
Build powerful progressive web apps with Firebase


NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning

Prasad Ramesh
25 Feb 2019
6 min read
On the second day of the NeurIPS conference held in Montreal, Canada, last year, Dr. Joelle Pineau presented a talk on reproducibility in reinforcement learning. She is an Associate Professor at McGill University and a Research Scientist at Facebook, Montreal, and the talk was titled 'Reproducible, Reusable, and Robust Reinforcement Learning'.

Reproducibility and the crisis

Dr. Pineau starts with a quote from Bollen et al. in a National Science Foundation report: "Reproducibility refers to the ability of a researcher to duplicate the results of a prior study, using the same materials as were used by the original investigator. Reproducibility is a minimum necessary condition for a finding to be believable and informative." Reproducibility is not a new concept and has appeared across various fields. In a 2016 survey of 1,576 scientists by the journal Nature, 52% said that there is a significant reproducibility crisis and 38% agreed there is a slight crisis.

Reinforcement learning is a very general framework for decision making. About 20,000 papers were published in this area in 2018 alone, with the year not even over yet, compared to just about 2,000 papers in the year 2000. The focus of the talk is the class of reinforcement learning that has received the most attention and has shown a lot of promise for practical applications: policy gradients. In this method, the policy or strategy is learned as a function, and this function can be represented by a neural network. Pineau picks four research papers in the class of policy gradients that come up in the literature most often. They use the MuJoCo simulator to compare the four algorithms. It is not important to know which algorithm is which; the intention is to show the approach of comparing the algorithms empirically. The results were different in different environments (Hopper, Swimmer), but the variance was also drastically different for a given algorithm. Even when using different code and policies, the results were very different for a given algorithm in different environments.

It was observed that people writing papers may not always be motivated to find the best possible hyperparameters and very often use the defaults. When the best possible hyperparameters were used for two algorithms compared fairly, the results were clean and distinguishable, with n=5, that is, five different random seeds. Picking n influences the size of the confidence interval (CI); n=5 here because most papers used at most 5 trials. Some papers also ran "n" runs, where n was not specified, and reported only the top 5 results. That is a good way to show good results, but there is a strong positive bias and the variance appears to be small. (Slide source: NeurIPS website.)

Some people argue that the field of reinforcement learning is broken. Pineau stresses that this is not her message, and notes that fair comparisons don't always give the cleanest results. Different methods may have very distinct sets of hyperparameters in number, value, and variable sensitivity. Most importantly, the best method to choose depends heavily on the data and the computation budget you can spare. This is an important point for achieving the stated reproducibility when applying these algorithms to your own problem. Pineau and her team surveyed 50 RL papers from 2018 and found that significance testing was applied in only 5% of them. Graphs with shading are seen in many papers, but without information on what the shaded area represents, readers cannot know whether it is a confidence interval or a standard deviation. Pineau says: "Shading is good but shading is not knowledge unless you define it properly."

A reproducibility checklist

For people publishing papers, Pineau presents a checklist created in consultation with her colleagues. It says that for algorithms, the items included should be a clear description, an analysis of complexity, and a link to source code and dependencies. For theoretical claims, a statement of the result, a clear explanation of any assumptions, and a complete proof of the claim should be included. There are also other items in the checklist for figures and tables. (The complete checklist is available on the NeurIPS website.)

The role of infrastructure in reproducibility

People may think that since the experiments are run on computers, results will be more predictable than in other sciences. But even in hardware there is room for variability, so specifying it can be useful, for example the properties of CUDA operations.

On some myths

"Reinforcement Learning is the only case of ML where it is acceptable to test on your training set." Do you have to train and test on the same task? Pineau says that you really don't have to, and presents three examples. The first is one where an agent moves around an image in four directions and then identifies what the image is; at higher n, the variance is greatly reduced. The second is an Atari game where the black background is replaced with videos, which are a source of noise and a better representation of the real world than a limited simulated environment where external real-world factors are absent. She then talks about multi-task RL in photorealistic simulators to incorporate noise. The simulator is an emulator built from images and videos taken in real homes. The environments created are completely photorealistic but have properties of the real world, for example mirror reflections. Working in the real world is very different from a limited simulation; for one, a lot more data is required to represent the real world than a simulation.

The talk ends with the message that science is not a competitive sport but a collective institution that aims to understand and explain. There is an ICLR reproducibility challenge that you can join. The goal is to get community members to try to reproduce the empirical results presented in a paper, on an open-review basis. Last year, 80% of authors changed their paper based on the feedback given by contributors who tested it. Head over to the NeurIPS Facebook page for the entire lecture and other sessions from the conference.

How NeurIPS 2018 is taking on its diversity and inclusion challenges
NeurIPS 2018: Rethinking transparency and accountability in machine learning
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
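As a small numerical illustration of the "top 5 of n runs" bias Pineau describes, here is a minimal sketch (ours, with synthetic numbers, not data from the talk) showing how reporting only the best seeds inflates the mean and shrinks the apparent confidence interval:

```python
# Minimal synthetic sketch of the selection bias described in the talk: the figures
# are made up, but the effect (higher mean, tighter-looking CI) is the point.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are episode returns of one RL algorithm over 50 random seeds.
returns = rng.normal(loc=1000.0, scale=300.0, size=50)

honest = returns[:5]                      # the first 5 seeds that were run
cherry_picked = np.sort(returns)[-5:]     # only the best 5 of all 50 runs

for label, sample in [("first 5 seeds", honest), ("top 5 of 50 runs", cherry_picked)]:
    mean = sample.mean()
    # 95% CI half-width under a normal approximation; with n=5 it is already wide.
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(len(sample))
    print(f"{label}: {mean:7.1f} +/- {half_width:6.1f}")
```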

10 To-dos for Industrial Internet Architects

Aaron Lazar
24 Jan 2018
4 min read
(This is a guest post by Robert Stackowiak, a technology business strategist at the Microsoft Technology Center. Robert has co-authored the book Architecting the Industrial Internet with Shyam Nath, who is the director of technology integrations for Industrial IoT at GE Digital. You may also check out our interview with Shyam for expert insights into the world of IIoT, Big Data, Artificial Intelligence and more.)

Just about every day, one can pick up a technology journal or view an online technology article about what is new in the Industrial Internet of Things (IIoT). These articles usually provide insight into IIoT solutions to business problems or into how a specific technology component is evolving to provide a needed function. Various industry consortia, such as the Industrial Internet Consortium (IIC), provide extremely useful documentation defining key aspects of the IIoT architecture that the architect must consider. These broad reference architecture patterns have also begun to consistently include specific technologies and common components. The authors of Architecting the Industrial Internet felt the time was right for a practical guide for architects. The book provides guidance on how to define and apply an IIoT architecture in a typical project today by describing architecture patterns. In this article, we explore ten to-dos for Industrial Internet architects designing these solutions.

Just as technology components are showing up in common architecture patterns, their justification and use cases are also being discovered through repeatable processes. The sponsorship and requirements for these projects are almost always driven by leaders in the line of business in a company. Techniques for uncovering these projects can be replicated as architects gain the needed discovery skills.

Industrial Internet architects' to-dos:

1. Understand IIoT: Architects first seek to gain an understanding of what is different about the Industrial Internet, the evolution to specific IIoT solutions, and how legacy technology footprints might fit into that architecture.
2. Understand IIoT project scope and requirements: They next research guidance from industry consortia and gather functional viewpoints. This helps them better understand the requirements their architecture must deliver solutions to, and the scope of effort they will face.
3. Act as a bridge between business and technical requirements: They quickly come to realize that, since successful projects are driven by responding to business requirements, the architect must bridge the line-of-business and IT divide present in many companies. They are always on the lookout for requirements and means to justify these projects.
4. Narrow down viable IIoT solutions: Once the requirements are gathered and a potential project appears to be justifiable, requirements and functional viewpoints are aligned in preparation for defining a solution.
5. Evaluate IIoT architectures and solution delivery models: Time to market of a proposed Industrial Internet solution is often critical to business sponsors. Most architecture evaluations include consideration of applications or pseudo-applications that can be modified to deliver the needed solution in a timely manner.
6. Have a good grasp of IIoT analytics: The intelligence delivered by these solutions is usually linked to the timely analysis of data streams, and care is taken in defining Lambda architectures (or Lambda variations), including machine learning and data management components and where analysis and response must occur.
7. Evaluate deployment options: Technology deployment options are explored, including the capabilities of proposed devices, networks, and cloud or on-premises backend infrastructures.
8. Assess IIoT security considerations: Security is top of mind today, and proper design includes not only securing the backend infrastructure but also securing networks and the edge devices themselves.
9. Conform to governance and compliance policies: The viability of the Industrial Internet solution can be determined by whether proper governance is put into place and whether compliance standards can be met.
10. Keep up with the IIoT landscape: While relying on current best practices, the architect must keep an eye on the future, evaluating emerging architecture patterns and solutions.

Author's bio: Robert Stackowiak is a technology business strategist at the Microsoft Technology Center in Chicago, where he gathers business and technical requirements during client briefings and defines Internet of Things and analytics architecture solutions, including those that reside in the Microsoft Azure cloud. He joined Microsoft in 2016 after a 20-year stint at Oracle where he was Executive Director of Big Data in North America. Robert has spoken at industry conferences around the world and co-authored many books on analytics and data management, including Big Data and the Internet of Things: Enterprise Architecture for A New Age (Apress), five editions of Oracle Essentials (O'Reilly Media), Oracle Big Data Handbook (Oracle Press), Achieving Extreme Performance with Oracle Exadata (Oracle Press), and Oracle Data Warehousing and Business Intelligence Solutions (Wiley). You can follow him on Twitter at @rstackow.


Everything you need to know about Ethereum

Packt Editorial Staff
10 Apr 2018
8 min read
Ethereum was first conceived of by Vitalik Buterin in November 2013. The critical idea proposed was the development of a Turing-complete language that allows the development of arbitrary programs (smart contracts) for blockchains and decentralized applications. This concept is in contrast to Bitcoin, where the scripting language is limited in nature and allows only the necessary operations. This is an excerpt from the second edition of Mastering Blockchain by Imran Bashir.

The following list shows all the releases of Ethereum, from the first release to the planned final release:

Olympic - May 2015
Frontier - July 30, 2015
Homestead - March 14, 2016
Byzantium (first phase of Metropolis) - October 16, 2017
Metropolis - to be released
Serenity (final version of Ethereum) - to be released

The first version of Ethereum, called Olympic, was released in May 2015. Two months later, a second version was released, called Frontier. After about a year, another version named Homestead, with various improvements, was released in March 2016. The latest Ethereum release is called Byzantium. This is the first part of the development phase called Metropolis. This release implemented a planned hard fork at block number 4,370,000 on October 16, 2017. The second part of this release, called Constantinople, is expected in 2018, but there is no exact time frame available yet. The final planned release of Ethereum is called Serenity, which is planned to introduce the final version of a PoS-based blockchain in place of PoW.

The yellow paper

The Yellow Paper, written by Dr. Gavin Wood, serves as a formal definition of the Ethereum protocol. Anyone can implement an Ethereum client by following the protocol specifications defined in the paper. While this paper is a challenging read, especially for those who do not have a background in algebra or mathematics, it contains a complete formal specification of Ethereum. This specification can be used to implement a fully compliant Ethereum client. The list of symbols used in the paper, with their meanings, is provided here in the hope that it will make reading the yellow paper more accessible. Once the symbol meanings are known, it is much easier to understand how Ethereum works in practice.

≡ : is defined as
≤ : less than or equal to
= : is equal to
σ (Sigma) : world state
≠ : is not equal to
μ (Mu) : machine state
║...║ : length of
Υ (Upsilon) : Ethereum state transition function
∈ : is an element of
Π : block-level state transition function
∉ : is not an element of
. : sequence concatenation
∀ : for all
∃ : there exists
∪ : union
Λ : contract creation function
∧ : logical AND
: : such that
⌊ ⌋ : floor, lowest element
{} : set
⌈ ⌉ : ceiling, highest element
() : function of tuple
[] : array indexing
⊕ : exclusive OR
∨ : logical OR
(a, b) : real numbers >= a and < b
> : is greater than
∅ : empty set, null
+ : addition
- : subtraction
∑ : summation
{ : describing various cases of if, otherwise

The paper also defines symbols for the increment operation and for the number of bytes.

Ethereum blockchain

Ethereum, like any other blockchain, can be visualized as a transaction-based state machine. This definition is referred to in the Yellow Paper. The core idea is that in the Ethereum blockchain, a genesis state is transformed into a final state by executing transactions incrementally. The final transformation is then accepted as the absolute, undisputed version of the state.
In the following diagram, the Ethereum state transition function is shown, where a transaction execution has resulted in a state transition. In the example, a transfer of two Ether from address 4718bf7a to address 741f7a2 is initiated. The initial state represents the state before the transaction execution, and the final state is what the morphed state looks like. Mining plays a central role in state transition, and we will elaborate on the mining process in detail in later sections. The state is stored on the Ethereum network as the world state. This is the global state of the Ethereum blockchain.

How Ethereum works from a user's perspective

For all the conversation around cryptocurrencies, it's very rare for anyone to actually explain how they work from the perspective of a user. Let's take a look at how it works in practice. In this example, one user (Bashir) transfers money to another (Irshad). You may also want to read our post on whether Ethereum will eclipse Bitcoin. For the purposes of this example, we're using the Jaxx wallet; however, you can use any cryptocurrency wallet for this.

1. First, either a user requests money by sending a request to the sender, or the sender decides to send money to the receiver. The request can be made by sending the receiver's Ethereum address to the sender. For example, if Irshad requests money from Bashir, she can send the request to Bashir as a QR code that encodes her Ethereum address, shared via email, text or any other communication method.
2. Once Bashir receives this request, he will either scan the QR code or manually copy Irshad's Ethereum address into the Ethereum wallet software and initiate a transaction. This process is shown in the screenshots, where the Jaxx Ethereum wallet software on iOS is used to send money to Irshad: the sender enters both the amount and the destination address, and, just before sending the Ether, confirms the transaction.
3. Once the request (transaction) for sending money is constructed in the wallet software, it is broadcast to the Ethereum network. The transaction is digitally signed by the sender as proof that he is the owner of the Ether.
4. The transaction is then picked up by nodes called miners on the Ethereum network for verification and inclusion in a block. At this stage, the transaction is still unconfirmed.
5. Once it is verified and included in a block, the PoW process begins. Once a miner finds the answer to the PoW problem, by repeatedly hashing the block with a new nonce, the block is immediately broadcast to the rest of the nodes, which then verify the block and the PoW. If all the checks pass, the block is added to the blockchain, and the miners are paid rewards accordingly.
6. Finally, Irshad receives the Ether, and it is shown in her wallet software.

On the blockchain, this transaction is identified by the following transaction hash: 0xc63dce6747e1640abd63ee63027c3352aed8cdb92b6a02ae25225666e171009e. Details regarding this transaction can be visualized in a block explorer. This walkthrough should give you some idea of how it works.
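For readers who prefer to see the same transfer as code, here is a minimal sketch using the web3.py library (our illustration, not part of the book excerpt). It assumes a local development node, for example Ganache or geth in dev mode, exposing JSON-RPC on port 8545 with pre-funded, unlocked test accounts, and uses web3.py v6 naming; the two accounts stand in for Bashir and Irshad:

```python
# Minimal sketch (an assumption, not from the excerpt): sending 2 Ether between two
# unlocked test accounts on a local development node via web3.py (v6-style names).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))   # local JSON-RPC endpoint

bashir, irshad = w3.eth.accounts[0], w3.eth.accounts[1]  # pre-funded dev accounts

# Construct and broadcast the transaction; the dev node signs it with Bashir's key.
tx_hash = w3.eth.send_transaction({
    "from": bashir,
    "to": irshad,
    "value": w3.to_wei(2, "ether"),                      # the 2 Ether transfer above
})

# Wait until the transaction is mined into a block (effectively instant on a dev chain).
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Transaction", tx_hash.hex(), "included in block", receipt.blockNumber)
```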
Different Ethereum networks

The Ethereum network is a peer-to-peer network where nodes participate in order to maintain the blockchain and contribute to the consensus mechanism. Networks can be divided into three types, based on requirements and usage. These types are described in the following subsections.

Mainnet

Mainnet is the current live network of Ethereum. The current version of mainnet is Byzantium (Metropolis) and its chain ID is 1. The chain ID is used to identify the network. A block explorer, which shows detailed information about blocks and other relevant metrics, is available and can be used to explore the Ethereum blockchain.

Testnet

Testnet is the widely used test network for the Ethereum blockchain. This test blockchain is used to test smart contracts and DApps before they are deployed to the live production blockchain. Because it is a test network, it allows experimentation and research. The main testnet is called Ropsten, which contains all the features of the other, smaller, special-purpose testnets that were created for specific releases. For example, other testnets include Kovan and Rinkeby, which were developed for testing Byzantium releases. The changes that were implemented on these smaller testnets have also been implemented on Ropsten, so the Ropsten test network now contains all the properties of Kovan and Rinkeby.

Private net

As the name suggests, this is a private network that can be created by generating a new genesis block. This is usually the case in private blockchain distributed ledger networks, where a private group of entities start their own blockchain and use it as a permissioned blockchain. The following list shows the Ethereum networks with their network IDs, which Ethereum clients use to identify the network:

Ethereum mainnet - 1
Morden - 2
Ropsten - 3
Rinkeby - 4
Kovan - 42
Ethereum Classic mainnet - 61

You should now have a good foundation of knowledge to get started with Ethereum. To learn more about Ethereum and other cryptocurrencies, check out the new edition of Mastering Blockchain.

Other posts from this book:
A brief history of Blockchain
Write your first Blockchain: Learning Solidity Programming in 15 minutes
15 ways to make Blockchains scalable, secure and safe!
What is Bitcoin


Why choose Ansible for your automation and configuration management needs?

Savia Lobo
03 Jul 2018
4 min read
Of late, organizations have been moving towards automating their systems. The benefits are many: it saves a huge chunk of time, and it reduces the human effort spent on simple tasks such as updates. A few years back, Chef and Puppet were the two popular names whenever tools for software automation came up. Over the years, they have gained a strong rival which has surpassed them and now sits among the best-known tools for software automation: Ansible.

Ansible is an open source tool for IT configuration management, deployment, and orchestration. It is perhaps the definitive configuration management tool. Chef and Puppet may have got there first, but its rise over the last couple of years is largely down to its impressive automation capabilities. And with operations engineers and sysadmins facing constant time pressures, the need to automate isn't a 'nice to have' but a necessity. Its tagline is "allowing smart people to do smart things." It's hard to argue that any software should aim to do much more than that.

Ansible's rise in popularity

Ansible originated in 2013 and is a leader in IT automation and DevOps. It was bought by Red Hat in 2015 to further Red Hat's goal of creating frictionless IT. The reason Red Hat acquired Ansible was its simplicity and versatility. It got the second-mover advantage of entering the DevOps world after Puppet, which meant it could orchestrate multi-tier applications in the cloud. This improves server uptime by implementing an 'immutable server architecture' for deploying, creating, deleting, or migrating servers across different clouds. For those starting afresh, it is easy to write and maintain automation workflows, and it provides a plethora of modules which make it easy for newbies to get started.

Benefits for Red Hat and its community

Ansible complements Red Hat's popular cloud products, OpenStack and OpenShift. Red Hat proved to be a complex yet safe open source choice for enterprises; however, it was not easy to use, and because of this many developers started migrating to other cloud services for easy and simple deployment options. By adopting Ansible, Red Hat finally provided an easy option to automate and modernize their IT solutions. Customers can now focus on automating various baseline tasks. It also helps Red Hat refresh its traditional playbooks, and it allows enterprises to use IT services and infrastructure together with the help of Ansible's YAML.

The most prominent benefit of using Ansible, for both enterprises and individuals, is that it is agentless. It achieves this by leveraging SSH and Windows Remote Management. Both these approaches reuse connections and use minimal network traffic. The approach also has added security benefits and improves both client and central management server resource utilization. Thus, the user does not have to worry about network or server management and can focus on other priority tasks.

What can you use it for?

Easy configurations: Ansible provides developers with easy-to-understand configurations, understood by both humans and machines. It also includes many modules and user-built roles, so one need not start building from scratch.

Application lifecycle management: One can rest assured about the application development lifecycle with Ansible. Here, Ansible is used for defining the application, and Red Hat Ansible Tower is used for managing the entire deployment process.

Continuous delivery: Manage your business with the help of Ansible's push-based architecture, which allows sturdier control over all the required operations. Orchestration of server configuration in batches makes it easy to roll out changes across the environment.

Security and compliance: While security policies are defined in Ansible, one can choose to integrate the process of scanning and solving issues across the site into other automated processes. Scanning of jobs and system tracking ensures that systems do not deviate from the parameters assigned. Additionally, Ansible Tower provides secure storage for machine credentials and RBAC (role-based access control).

Orchestration: It brings a high degree of discipline and order to the environment. This ensures all application pieces work in unison and are easily manageable, despite the complexity of the applications themselves.

Though it is popular as an IT automation tool, many organizations use Ansible in combination with Chef and Puppet, because it may have scaling issues and lack performance for larger deployments. Don't let that stop you from trying Ansible; it is much loved by DevOps engineers, as it is written in Python and thus easy to learn. Moreover, it offers credible support and an agentless architecture, which makes it easy to control servers and much more within an application development environment.

An In-depth Look at Ansible Plugins
Mastering Ansible – Protecting Your Secrets with Ansible
Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
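To make the "easy configurations" point above concrete, here is a minimal sketch (ours, not from the article) that writes a tiny Ansible playbook and runs it from Python using the ansible-runner package; the playbook name and paths are illustrative only:

```python
# Minimal sketch, assuming Ansible and the ansible-runner package are installed
# (pip install ansible ansible-runner). It writes a tiny YAML playbook that pings
# localhost and runs it; against real hosts, Ansible would work the same way,
# agentlessly over SSH.
import pathlib
import tempfile

import ansible_runner

data_dir = pathlib.Path(tempfile.mkdtemp())        # ansible-runner's private data dir
project = data_dir / "project"                     # playbooks live under project/
project.mkdir()

# Playbooks are plain YAML: readable by humans, executable by Ansible.
(project / "site.yml").write_text(
    "- hosts: localhost\n"
    "  connection: local\n"
    "  tasks:\n"
    "    - name: Check that the host is reachable\n"
    "      ping:\n"
)

result = ansible_runner.run(private_data_dir=str(data_dir), playbook="site.yml")
print(result.status, result.rc)                    # e.g. "successful", 0
```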


Social engineering attacks – things to watch out for while online

Savia Lobo
16 Jul 2018
4 min read
The rise in internet adoption has been matched by a rise in cybersecurity attacks. We like to think that a few layers of firewall, or browsing over 'https' (where the 's' stands for secure), will protect us from malware, or that our credentials are safe simply because Google holds them. That is a myth. In fact, the biggest loophole in most security breaches is us, humans. It is human nature to help someone in need, or to get curious about a sale or a competition promising a large sum of money. These and many other instincts become the bait that hackers use to fish out account credentials. The result is a social engineering attack, which, if it goes unnoticed, can seriously compromise your security online.

Common social engineering attacks

Phishing

The method is analogous to fishing: bait is laid to attract the fish. Here the bait is an email carrying a malicious attachment or a clickable link, sent out to millions of users who are tricked into logging into fake versions of popular websites, for instance IBM or Microsoft. The main aim of a phishing attack is to capture login information such as passwords or bank account details. Some attacks, however, target specific people or organizations; such targeted phishing is known as spear phishing. In a spear phishing attack, the attacker crafts a message for a specific individual. Once a target is identified, for instance a manager at a renowned firm, the attacker researches them by browsing their profile on social media sites such as Twitter or LinkedIn, then creates a spoofed email address designed to make the manager believe the message comes from higher management. The mail may ask for credentials that should remain confidential between managers and senior executives.

Ads

While browsing the web, users often encounter flash advertisements or pop-ups asking for permission to allow a blocked cookie. These pop-ups can sometimes be malicious. In one scenario, a malicious ad attacks the user's browser and redirects it to a new domain whose window cannot be closed; in another, instead of redirecting, the malicious site is embedded in the current page using an HTML iframe. Once either scenario succeeds, the attacker tries to trick the user into downloading a fake Flash update, filling in a phishing form, or believing that their system is infected with malware.

Lost USB drive

What would you do if you found a USB drive lying next to the photocopier or the water cooler? Most of us would plug it into our system to find out who the owner is, and that helpful instinct is exactly what USB baiting exploits: a social engineering attack in which hackers load malicious files onto a USB drive and drop it in a crowded place or a library. USB baiting featured in the TV show Mr. Robot in 2016, where a planted USB key needed only a fraction of a second to start using HID spoofing to gather FBI passwords. A similar flash drive attack actually took place in 2008, when an infected drive was plugged into a US military laptop in the Middle East; the drive, believed to have been planted by a foreign intelligence agency, caused a serious breach of US military networks.
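Looping back to the spear-phishing scenario above, the snippet below is a toy illustration of the kind of naive spoofed-sender check an email filter (or a cautious reader) can apply: it compares the domain in the From header against domains the organization actually trusts and flags lookalikes. It is a minimal sketch, not a real mail-security tool; the trusted domain and sample addresses are made up for the example.

```python
import difflib
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-corp.com"}  # hypothetical domains your company really uses

def looks_spoofed(from_header: str) -> bool:
    """Return True if the sender's domain is untrusted but closely resembles a trusted one."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain in TRUSTED_DOMAINS:
        return False
    # Flag domains that look almost like a trusted one (e.g. examp1e-corp.com).
    return any(
        difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.8
        for trusted in TRUSTED_DOMAINS
    )

print(looks_spoofed('"CEO" <ceo@examp1e-corp.com>'))   # True: lookalike domain
print(looks_spoofed('"CEO" <ceo@example-corp.com>'))   # False: genuine domain
```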
How can you protect yourself from these attacks? For organizations, the best way to avoid the kind of mistakes that lead to large financial losses is a good employee training program. Such a program makes employees aware of the different kinds of social engineering attacks and the channels through which attackers approach their targets. One effective approach is hands-on experience: put employees in the attacker's shoes and let them perform an attack themselves. Tools such as Kali Linux can be used to explore the ways hackers think, and hence how to safeguard individual or organizational information. The following video walks through how a social engineering attack works; the author uses Kali Linux to demonstrate the attack in practice.

YouTube has a $25 million plan to counter fake news and misinformation
10 great tools to stay completely anonymous online
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news

A Guide to safe cryptocurrency trading

Guest Contributor
02 Aug 2018
8 min read
So, you've decided to take a leap of faith and start trading in cryptocurrency. But do you know how to do it safely? Cryptocurrency has surged in popularity of late, especially since its market reached half a trillion dollars in 2017. That's good news if you have ever wanted to trade in a system that veers away from tradition, or if you simply distrust the traditional market with its brokers and bankers. Cryptocurrency trading is not without risks, however. Hackers work every day to scam you out of your hard-earned crypto cash by stealing or coaxing your private keys from you, and because cryptocurrency is largely unregulated, there is nowhere to turn if you lose your money. So, should you steer clear of cryptocurrency after all? Heck, no! Read this guide and you'll be a few steps closer to safe cryptocurrency trading in no time.

Know the basics

As with any endeavor that involves money, you should at least learn the basic ins and outs of cryptocurrency trading, and always exercise prudence when dealing in cryptocurrency. Look for books or reliable sites that guide you through the various risks you might face, and keep up to date with the latest news and trends in cryptocurrency-related cybersecurity threats.

Use a VPN

Most people believe that cryptocurrencies are great for privacy because no personal information is needed to buy or sell; in short, that they are anonymous. This couldn't be further from the truth. Cryptocurrencies are pseudonymous, not anonymous. Each address acts as your pseudonym, which means that if your transactions are ever linked to your identity (for example, via the IP address observed when you broadcast a transaction), you suddenly find yourself out in the open. A VPN hides this trail by masking your IP address and encrypting personal data such as your location and ISP. To make sure your sensitive transactions (especially those made over public Wi-Fi) stay private, use only the best VPN you can afford. The keyword here is "afford": never use a free VPN while trading cryptocurrency, because free VPNs have been known to share or sell your personal information to partners or third parties, and they aren't particularly secure to begin with. That was the case for the popular crypto service MyEtherWallet, which suffered a serious security issue after the popular free VPN Hola was compromised for five hours. This doesn't really come as a surprise, since Hola was never a secure VPN to begin with; check out a Hola VPN review and see for yourself. If you want better VPN options for cryptocurrency trading, try ZenMate or F-Secure Freedome.

Install an antivirus program

You can add another layer of safety by installing a high-quality antivirus program. These programs protect you from malware that could take over your computer or device, and from ransomware, which hackers use to encrypt some or all of your data and hold it hostage until you pay the ransom, which costs $133,000 on average. Unlike VPNs, free antivirus programs can offer quality protection; the best so far are Avast Free Antivirus and Bitdefender Antivirus.

Keep your private key to yourself

Your private key is essentially the password you use to access your cryptocurrency, and it's the only thing a hacker needs to get at your funds. Never share your private key with anyone.
Don't even show a QR code containing your private key. With that said, it's important to note that your private key is usually stored in your cryptocurrency wallet, which is either "hot" or "cold". A hot wallet is always online and ready to use, while a cold wallet is usually offline and only goes online when you need it. Hot wallets are provided by cryptocurrency exchanges when you register an account; they are easy to use and make your cryptocurrency more accessible. However, being provided by an exchange means you could lose all the funds in that wallet if the exchange is ever hacked, which has ended in shutdown for exchanges such as Mt. Gox and Youbit and in heavy losses at others such as Bitfinex. How do you avoid this? Easy: keep only the exact amount you need to spend in your hot wallet, and keep the rest in your cold wallet, also known as cold storage, which is entirely offline. That way, if your hot wallet provider is ever hacked and goes out of business, your loss is relatively small.

There are three types of cold wallets to choose from, and when picking one it's worth keeping in mind your purpose and the amount of cryptocurrency you plan to keep in it:

Hardware wallet: By far the most popular type, this wallet is a device that you plug into your computer's USB port. To date there has been no well-documented case of cryptocurrency being stolen from a hardware wallet, which makes it a good choice when you plan to hold large amounts. It is also convenient, as you don't need to type in your details every time you buy or sell. Check out a list of the best cryptocurrency hardware wallets before choosing one.

Paper wallet: This simply involves printing out your public and private keys on a piece of paper, which keeps them out of hackers' reach. It does make it a bit tedious to type in your keys every time you need them online, and you risk losing all your funds if the paper ends up in someone else's hands, so keep it safe and secure.

Brainwallet: This type of wallet involves keeping your keys in your head, usually by memorizing a seed phrase. As long as you never record the seed phrase anywhere else, you are the only person who will ever know your keys, which makes this the most private wallet of all. The flip side is that if the owner ever forgets the phrase (or worse, dies), the cryptocurrency tied to it is lost forever. A small sketch of the idea behind deriving a key from a memorized phrase follows below.
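As a purely illustrative sketch of the brainwallet idea, and emphatically not something to use for real funds, the snippet below derives a 256-bit key by hashing a memorized passphrase with SHA-256. The passphrase is a made-up example, and real wallets use stronger, standardized schemes (such as BIP-39 seed phrases with key stretching), precisely because a phrase a human can remember is often easy for an attacker to guess.

```python
import hashlib

# Hypothetical memorized passphrase; never use a guessable phrase like this for real funds.
seed_phrase = "correct horse battery staple picked in 2018"

# A toy brainwallet: the private key is just the SHA-256 digest of the phrase.
private_key = hashlib.sha256(seed_phrase.encode("utf-8")).hexdigest()

print("derived private key:", private_key)
# Anyone who knows (or guesses) the phrase can re-derive exactly the same key,
# which is both the convenience and the danger of a brainwallet.
```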
Beware of phishing

Phishing attacks usually arrive through deceptive emails and websites, where a hacker uses fraudulent (usually psychological) tactics to get you to divulge private details. This type of cyber attack was responsible for over $115 million in stolen Ethereum last year alone. You might be thinking, "why don't people just avoid suspicious emails or messages?" The thing is, they are hard to resist, so read up on how to tell whether someone is phishing for your cryptocurrency.

Trade in secure exchanges

Cryptocurrencies are usually bought and sold on a cryptocurrency exchange, but not all exchanges can be trusted; some have already been proven fake. The problem is that there is no built-in protection and nowhere to turn for help if you lose your money, because cryptocurrency is, for the most part, unregulated, although the world is starting to catch up. So make sure to do your research before putting your money into any cryptocurrency exchange, and check out a list of security tips for a more detailed set of safe trading practices.

Conclusion

Cryptocurrency trading can be hard, confusing, and downright risky, but if you follow this guide you are at least a few steps closer to trading safely. Arm yourself with at least a basic knowledge of how cryptocurrency trading works. Don't fall for the illusion of anonymity that has fooled others: get the best VPN you can afford and install a reliable antivirus program to avoid malware and ransomware. Never reveal your private key. Hot wallets are fine if they hold only the exact amount you want to spend, but keep the rest of your keys safe in a cold wallet that fits your purpose. Be wary of suspicious sites, emails, or messages that could turn out to be phishing scams, and trade only on secure cryptocurrency exchanges.

About the author: Dana Jackson, a U.S. expat living in Germany, is the founder of PrivacyHub. She loves all things related to security and privacy, holds a degree in Political Science, and loves to call herself a scientist. Dana also loves morning coffee and her dog Paw.

Cryptocurrency-based firm, Tron acquires BitTorrent
Can Cryptocurrency establish a new economic world order?
Top 15 Cryptocurrency Trading Bots

3 ways JupyterLab will revolutionize Interactive Computing

Amey Varangaonkar
17 Nov 2017
4 min read
The history of the Jupyter notebook is quite interesting. It started in 2011 as a spin-off of IPython, with support for the leading data science languages R, Python, and Julia. As the project grew, its core focus shifted toward being more interactive and user-friendly, and it soon became clear that Jupyter wasn't just an extension of IPython, leading to the 'Big Split' in 2014. Code reusability, easy sharing and deployment, and extensive support for third-party extensions are some of the factors that have made Jupyter the notebook of choice for most data professionals. Now the Jupyter team plans to go a level beyond with JupyterLab, the next-generation Jupyter notebook with strong interactive and collaborative computing features.

What is JupyterLab? JupyterLab is the next-generation, end-user version of the popular Jupyter notebook, designed to enhance interaction and collaboration among users. It takes all the familiar features of the Jupyter notebook and presents them through a powerful, user-friendly interface.

Here are three ways, or reasons shall we say, to look forward to this exciting new project, and how it will change interactive computing as we know it.

1. Improved UI/UX

One of Jupyter's strongest and most popular traits is how user-friendly it is; the overall experience of working in Jupyter is second to none. With UI/UX improvements, JupyterLab offers a cleaner interface while keeping an overall feel very similar to the current Jupyter notebooks. Although JupyterLab has been built with a web-first vision, it also ships a native Electron app that provides a simplified user experience. The other key difference is that JupyterLab is fairly command-centric, encouraging users to reach for keyboard shortcuts for quicker tasks; the shortcuts differ a little from those of other text editors and IDEs, but they are customizable.

2. Better workflow support

Many data scientists start coding in an interactive shell and then migrate their code into a notebook for building and deployment. With JupyterLab, users can move through these activities more seamlessly and with minimal effort: it offers a document-less console for quick data exploration and an integrated text editor for running blocks of code outside the notebook.

3. Better interactivity and collaboration

Probably the defining feature that lifts JupyterLab above Jupyter and other notebooks is how interactive and collaborative it is. JupyterLab supports side-by-side editing and a crisp layout that lets you view your data, the notebook, the command console, and a graphical display all at the same time. Better real-time collaboration is another big promise: users will be able to share notebooks Google Drive or Dropbox style, without switching to a different tool. JupyterLab will also support a plethora of third-party extensions to this effect, with the Google Drive extension the most talked about. Popular Python visualization libraries such as Bokeh will be integrated with JupyterLab, as will extensions for viewing and handling different file types, such as CSV for interactive rendering and GeoJSON for geographic data structures.
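As a small, hedged illustration of that interactive-rendering story, the snippet below renders a Bokeh plot inline in a notebook. It assumes a Jupyter environment with Bokeh installed (and, in JupyterLab, the corresponding Bokeh/JupyterLab extension enabled); the data points are made up for the example.

```python
from bokeh.io import output_notebook
from bokeh.plotting import figure, show

output_notebook()  # route Bokeh output into the notebook instead of a separate page

# A tiny interactive scatter plot: pan, zoom, and hover work in the rendered output.
p = figure(title="Sample interactive scatter", x_axis_label="x", y_axis_label="y")
p.scatter([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size=12)

show(p)
```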
JupyterLab has gained a lot of traction over the last few years. While it is still some time away from being generally available, the current indicators look strong: with over 2,500 stars and 240 enhancement requests on GitHub already, the interest among users is clear. Judging by the initial impressions it has made, JupyterLab hasn't made a bad start at all, and it looks well and truly set to replace the current Jupyter notebooks in the near future.

What can Google Duplex do for businesses?

Natasha Mathur
16 May 2018
9 min read
When we talk about the capabilities of AI-driven digital assistants, the most discussed shortcoming is their inability to converse the way a real human does. The robotic tone of virtual assistants has long limited how convincingly they can imitate people, and it isn't just the flat monotone: it's about understanding the nuances of language, pitch, intonation, sarcasm, and much more. So what happens when a technology emerges that sounds and behaves almost human? Look no further: Google Duplex is here to shake up the world of digital assistants.

Google introduced Duplex at Google I/O 2018, its annual developer conference, last week. But what exactly is it? Google Duplex is a new feature of the famed Google Assistant. It extends the Assistant's capabilities by making phone calls on the user's behalf, imitating natural human conversation almost perfectly to get day-to-day tasks done, such as booking a table or a hair salon appointment. It even includes pause fillers and phrases such as "um", "uh-huh", and "erm" to make the conversation sound as natural as possible. Don't believe me? Listen for yourself:

Audio: Google Duplex booking an appointment at a hair salon (https://hub.packtpub.com/wp-content/uploads/2018/05/Google-Duplex-hair-salon.mp3)

Audio: Google Duplex making a table reservation at a restaurant (https://hub.packtpub.com/wp-content/uploads/2018/05/Google-Duplex-table-reservation.mp3)

The recording of the call between the assistant and a business employee, played by Google CEO Sundar Pichai during the opening keynote, left listeners unsure who was the assistant and who was the human, and it went notably viral. Plenty of people are now asking whether Google Duplex has just passed the Turing Test, which assesses a machine's ability to exhibit intelligence close or equivalent to that of a human being. Did the new human-sounding assistant pass? No, but it is certainly the voice AI that has come closest.

How does Google Duplex work? It's quite simple: Duplex finds the information you need that isn't available on the internet by making a direct phone call. For instance, if a restaurant has moved and its new address is nowhere to be found online, Duplex will call the restaurant and ask for the new address. The system also monitors itself, recognizing complex tasks it cannot complete on its own and signaling them to a human operator, who then takes over.

To get a bit technical, Google Duplex uses recurrent neural networks (RNNs) built with TensorFlow Extended (TFX), a machine learning platform. Duplex's RNNs are trained on anonymized phone conversation data; data anonymization protects the identity of a company or an individual by removing the data that relates to them. The network consumes the output of Google's automatic speech recognition technology, the conversation history, and various parameters of the conversation, and hyperparameter optimization from TFX further improves the model.

But how does it sound natural? Google uses concatenative text-to-speech (TTS) together with a synthesis TTS engine (using Tacotron and WaveNet) to control intonation depending on the circumstances. Concatenative TTS converts text into speech by concatenating, or linking together, recorded speech fragments, while the synthesis TTS engine lets developers modify the speech rate, volume, and pitch of the synthesized output. Adding speech disfluencies ("hmm"s, "erm"s, and "uh"s) makes Duplex sound more human; they are inserted where very different sound units meet in the concatenative TTS, or as synthetic waits, which lets the system signal in a natural way that it is still processing, much as humans do while sorting out their thoughts. The delay, or latency, must also match people's expectations: Duplex works out when to give slower or faster responses using low-confidence models or faster approximations, and Google found that adding latency in the right places actually makes the conversation sound more natural.
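Google's implementation is not public, so purely as a toy sketch of the two ideas just described, disfluencies and deliberate latency, the snippet below dresses up a canned reply with a filler word and a pause whose length grows with how "hard" the request is assumed to be. Every name and number here is invented for illustration and has nothing to do with Google's actual system.

```python
import random
import time

FILLERS = ["um", "uh", "mm-hmm"]

def humanize_reply(reply: str, difficulty: float) -> str:
    """Toy illustration only: add a filler word and a 'thinking' pause
    scaled by an assumed difficulty score between 0 and 1."""
    time.sleep(min(1.5, 0.2 + difficulty))          # harder request, longer pause
    return f"{random.choice(FILLERS)}, {reply}"

print(humanize_reply("we could do 7 pm on Thursday", difficulty=0.7))
```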
Some potential applications of Google Duplex for businesses

Now that we've covered the what and how of this new technology, let's look at five potential applications of Google Duplex in the near future.

Customer service

Basic forms of AI using natural language processing (NLP), such as chatbots and existing voice assistants like Siri and Alexa, are already used across the customer care industry. Google Duplex, with its spectacular human-sounding capability, paves the way for an even more interactive way of engaging customers and gathering information. According to Gartner, "By 2018, 30% of our interactions with technology will be through 'conversations' with smart machines." With Google Duplex as the latest of those smart machines, the basic operations of customer service become easier, more manageable, and more efficient: it fits the bill for resolving initial customer support problems and delivering internal services to employees, and it will only get better as NLP advances. So far, chatbots and digital assistants have been poor at handling irate customers; one can imagine Google Duplex, in John Legend's smooth voice, calming down an angry customer or even making a persuasive sales pitch to a potential lead with all its charm and suave. Of course, Duplex would need the right customer management training, with a massive amount of quality data on what good and bad handling look like, before it is ready for such a challenge. Another customer service area where Google Duplex could play a major role is IT support: instead of connecting straight to a human operator, the user would first be connected to Duplex, making the experience friendly and personalized for the user while saving organizations significant costs.

HR department

Google Duplex could also lend a hand in the HR department. With the right training, it could handle the preliminary rounds of talent acquisition in which hiring executives phone candidates, noting basic qualifications and candidate details and scheduling interviews. The Google Assistant could collect this information, and subsequent rounds could then be conducted by human HR personnel.
This could greatly cut down on the time HR executives spend on the first few rounds of shortlisting, freeing them to focus on more strategically important areas of hiring.

Personal assistants and productivity

As presented at Google I/O 2018, Google Duplex can book appointments at hair salons, reserve tables, and find out holiday hours over the phone. It is not a stretch to assume it could also order takeaway food, check with a delivery person about an order, cancel appointments, or make business inquiries. It is also a great aid for people with hearing loss, and for people who do not speak the local language, allowing them to carry out tasks over the phone.

Healthcare industry

There is already plenty of talk about using Alexa, Siri, and other voice assistants in healthcare, and Google Duplex is a new addition to the family. With its natural way of conversing, Duplex could let patients know their wait time for emergency rooms, check with the hospital about their health appointments, or order necessary equipment for hospital use. Elder care is an allied area: Google Duplex could help reduce ailments related to loneliness by engaging with users at a more human level. It could also assist with preventive care and the management of lifestyle diseases such as diabetes by making sure patients continue taking their medication and keep their appointments, provide emergency first aid guidance, call 911, and so on.

Real estate industry

Duplex-enabled Google Assistants could make realtors' work easier by calling potential sellers and buyers, helping realtors qualify prospective customers. A conversation between Google Duplex (working for a realtor) and a customer looking to buy a house might go something like this:

Google Duplex: Hi! I heard you are house hunting. Are you looking to buy or sell a property?
Customer: Hey, I'm looking to buy a home in the Washington area.
Google Duplex: That's great! What part of Washington are you looking in?
Customer: I'm looking for a house in Seattle. 3 bedrooms and 3 baths would be fine.
Google Duplex: Sure, umm, may I know your budget?
Customer: Somewhere between $749,000 to $850,000, is that fine?
Google Duplex: Ahh okay sure, I've made a note and I'll call you once I find the right matches.
Customer: Yeah, sure.
Google Duplex: Okay, thanks.
Customer: Thanks, bye!

Google Duplex would then record the details on the realtor's phone, greatly reducing the effort realtors spend cold calling potential sellers, while the broker also receives an email with the consumer's details and contact information for a follow-up.

Every rose has its thorns. What's Duplex's thorny issue?

Alongside all the positive hype, there has been controversy over the ethics of Google Duplex. Some people have questions and mixed reactions about Duplex fooling people about its identity, since its voice differs so markedly from that of a typical robot, and discussion of the issue is trending across several Twitter threads. Google has responded by stressing that 'transparency in technology' is important and that it is 'designing this feature with disclosure built-in' so that the system identifies itself; Google has also said it welcomes feedback on the new product.
Google has successfully managed to awe people across the globe with the new and innovative Google Duplex, but there is still a long way to go, even though Google has taken a real step toward improving the relationship between humans and machines. If you enjoyed reading this article and want to know more, check out the official Google Duplex blog post.

Google's Android Things, developer preview 8: First look
Google News' AI revolution strikes balance between personalization and the bigger picture
Android P new features: artificial intelligence, digital wellbeing, and simplicity

Artificial General Intelligence, did it gain traction in research in 2018?

Prasad Ramesh
21 Feb 2019
4 min read
In 2017, we predicted that artificial general intelligence would gain traction in research and that certain areas of work would feed into AGI systems. The prediction was part of a set of AI predictions in an article titled 18 striking AI Trends to watch in 2018. Let's see how 2018 went for AGI research.

Artificial general intelligence, or AGI, is the area of AI that aims to give machines intelligence closer to the complex, general nature of human intelligence. Such a system could, in theory, perform any task a human can, learning as it goes from the data and sensory input it collects. Human intelligence also involves learning a skill and applying it elsewhere: if a human learns Dota 2, they can carry that experience over to other, similar strategy games, where only the interface and the characters differ. A machine cannot do this. Today's AI systems are trained for one specific area; their skills cannot really be transferred to another task with full efficiency, and attempting to do so risks accumulating technical debt. In other words, a machine cannot generalize skills the way a human can.

In 2018 we saw DeepMind's AlphaZero, which at least begins to hint at what an idea of AGI could look like. Even this is not really AGI: an AlphaZero-like system may excel at a variety of games, or even pick up the rules of novel games, but it cannot deal with the real world and its challenges.

Some groundwork and basic ideas for AGI were laid out in a paper by the US Air Force. In the paper, Dr. Paul Yaworsky describes artificial general intelligence as an effort to bridge the gap between lower- and higher-level work in AI, that is, to make sense of the abstract nature of intelligence. The paper also presents an organized, hierarchical model of intelligence that takes the external world into account.

One of Packt's authors, Sudharsan Ravichandiran, thinks that: "Great things are happening around RL research each and every day. Deep Meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI, a single model can master a wide variety of tasks and mimics the human intelligence."

Honda launched a program called Curious Minded Machine in association with MIT, the University of Pennsylvania, and the University of Washington. The idea sounds simple at first: build a model of how children 'learn to learn'. But something children do instinctively is a very complex task for a machine. The teams will showcase their work in their respective fields three years after the program's inception.

There was another effort, by SingularityNET and Mindfire, to explore AI and "crack the brain code", that is, to better understand how the human brain works. Together, the two companies will focus on three key areas: talent, AI services, and AI education. Mindfire Mission 2 will take place in early 2019 in Switzerland.

These were the main efforts around AGI in 2018: small steps in the research direction, and nothing noteworthy that gained mainstream traction. On average, experts think AGI is at least 100 years away from becoming a reality, according to Martin Ford's interviews with machine learning experts for his best-selling book, Architects of Intelligence.
OpenAI released a new language model called GPT-2 in February 2019. Given just a line or two of prompt text, the model can generate whole articles, and the results are good enough to pass as something written by a human. This does not mean the machine actually understands human language; it is merely generating sentences by associating words. Even so, the development has triggered passionate discussion within the community, not just on the technical merits of the findings but also on the dangers and implications such research could have for wider society. Expect to see more tangible AGI research over the next few decades.

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
Facebook's artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Unity and Deepmind partner to develop Virtual worlds for advancing Artificial Intelligence

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)

Robi Sen
15 Jul 2015
8 min read
With the growth of cloud services such as Google Cloud Platform, Microsoft Azure, Amazon Web Services, and many others, developers and organizations have unprecedented access to low-cost, high-performance infrastructure that can scale as needed. Everyone from individuals to major companies, and especially small companies and start-ups, has embraced the cloud as the platform of choice for hosting IT services and applications. Yet for many reasons, those who have embraced the cloud have often been slow to recognize the unique security considerations facing cloud users. Unlike hosting your own servers, the cloud operates on a shared responsibility model: the cloud provider focuses on physical security, failover, and high-level network perimeter protection, while the cloud user is expected to secure their own operating systems, data, applications, and the like. In other words, your cloud provider delivers incredible services for your business, but you are responsible for much of the security, including implementing access controls, intrusion prevention, intrusion detection, encryption, and so on. Because cloud services are so accessible and easy to set up, users often don't bother to secure them, or don't even realize they need to. If you're new to the cloud and new to security, this post is for you. While we will focus on Amazon Web Services, the basic concepts apply to most cloud services regardless of vendor.

Access control

Since you're using virtual resources that are already set up in the AWS cloud, one of the most important things to do right away is secure access to your account and images. First, lock down your AWS account: the login and password you are assigned when you set up the account. Anyone who has access to them can purchase new services, change your existing services, and generally cause complete havoc. Indeed, AWS accounts sell for good money on hacker and darknet sites, usually to buyers who want to set up Bitcoin miners at your expense. Don't give yours out or make it easily accessible; for example, many developers embed logins, passwords, and AWS keys in their code, which is very bad practice, and then have their accounts compromised by criminals.

The first thing to do after getting your Amazon login and password is to store them in a tool such as mSecure or LastPass, which keeps them in an encrypted file or database. They should never end up in a file, a document, or a public place. It is also strongly advised to use multi-factor authentication (MFA); Amazon supports MFA via physical devices or straightforward smartphone applications, and you can read more about Amazon's MFA options in the AWS documentation.

Once your AWS account information is secure, use AWS's Identity and Access Management (IAM) system to give each user under your master AWS account access with specific privileges, following best practices. Even if you are the only person who uses your AWS account, consider using IAM to create a few users whose access is based on their role, such as a content developer who can only move files in and out of specific directories, or a developer who can start and stop instances. Then always work with the role that has the fewest privileges needed to get the job done.
While this might seem cumbersome, you will quickly get used to it, you will be much safer, and if your project grows you will already have the groundwork in place to scale up securely.

Creating an IAM group and user

In this section, we will create an administrator group and add ourselves as a user. If you do not currently have an AWS account, you can get a free account from AWS; be advised that you will need a valid credit card and a phone number to validate the account, but Amazon will let you use it free for a year (see the terms for details). For this example, you need to do three things (a scripted version of the same steps appears after the walkthrough):

Create an administrator group that we will grant permissions to our AWS account's resources
Create a user for ourselves and add the user to the administrator group
Create a password for the user so we can access the AWS Management Console

To do this, first sign in to the IAM console. Click the Groups link and then select Create New Group. Name the new group Administrator and select Next Step. Next, we need to assign a group policy. You can build your own, but this should generally be avoided until you really understand AWS security policies and AWS in general. Amazon provides a number of predefined policy templates that work well until your applications and architecture grow more complex, so for now simply select the AdministratorAccess policy. You should now see a screen showing your new policy; click Next Step and then Create Group. The new Administrator group policy should appear under Group Name. In practice you would probably create all your groups first and then associate your users, but for now we will just create the Administrator group, create a single user, and add it to the group.

Creating a new IAM user account

Now that you have created an Administrator group, let's add a user to it. In the navigation menu, select Users and then click Create New Users. On the next screen you have the option to create access keys for this user; depending on the user you may or may not need them, but for now select that option and click Create. IAM will create the user and give you the option to view the new key or download and save it. Go ahead and download the credentials; it is good practice to store them in a password manager such as mSecure or LastPass and to share them with no one except the specific user. Once you have downloaded the user's credentials, select Close to return to the Users screen, then click on the user you created. You should now see the user's details (the username has been removed from the figure). Select Add User to Groups; you should see the group listing, which contains only one group if you're following along. Select the Administrator group and click Add to Groups. Back on the user's page, you should see that the user is now assigned to the Administrator group. Staying on the same screen, scroll down to the Security Credentials section and click Manage Password. You will be asked either to select an auto-generated password or to assign a custom password; go ahead and create your own password and select Apply. You should be taken back to the user's page, where, under the Security Credentials section, the password field has changed from No to Yes. You should also strongly consider adding MFA, in my case via the AWS Virtual MFA Android application, to make the account even more secure.
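For completeness, here is a hedged sketch of the same flow automated with the boto3 AWS SDK for Python. It assumes boto3 is installed and that credentials allowed to manage IAM are already configured locally; the user name and password below are placeholders, not recommendations.

```python
import boto3

iam = boto3.client("iam")  # uses locally configured AWS credentials

# 1. Create the Administrator group and attach the managed AdministratorAccess policy.
iam.create_group(GroupName="Administrator")
iam.attach_group_policy(
    GroupName="Administrator",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# 2. Create a user, add it to the group, and give it console access.
iam.create_user(UserName="example-admin")
iam.add_user_to_group(GroupName="Administrator", UserName="example-admin")
iam.create_login_profile(
    UserName="example-admin",
    Password="ChangeMe-Str0ng-Passw0rd!",
    PasswordResetRequired=True,  # force a password change at first sign-in
)

# 3. Optionally create programmatic access keys; store them in a password manager.
access_key = iam.create_access_key(UserName="example-admin")["AccessKey"]
print("Access key id:", access_key["AccessKeyId"])
```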
Summary

In this article, we discussed how the first step in securing your cloud services is controlling access to them. We looked at how AWS supports this through IAM, letting you create groups with group security policies and then add users to those groups, so you can secure your cloud resources according to best practices. From here you can add more groups and users to your AWS account as you need them; before you do, make sure you read the AWS IAM documentation thoroughly. Links are supplied below.

Resources for AWS IAM

IAM User Guide
Information on IAM Permissions and Policies
IAM Best Practices

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including co-founding several successful start-ups, and has worked with companies such as Under Armour, Sony, Cisco, and IBM to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or take to market.