
Tech Guides - Artificial Intelligence

170 Articles

Is Your Machine Learning Plotting To Kill You?

Sam Wood
21 Jan 2016
4 min read
Artificial Intelligence is just around the corner. Of course, it's been just around the corner for decades, but in part that's down to our own tendency to move the goalposts about what 'intelligence' is. Once, playing chess was one of the smartest things you could do. Now that a computer can easily beat a Grand Master, we've reclassified it as just standard computation, not requiring proper thinking skills. With the rise of deep learning and the proliferation of machine learning analytics, we edge ever closer to the moment when a computer system will be able to accomplish anything and everything better than a human can. So should we start worrying about SkyNet? Yes and no.

Rule of the Human Overlords

Early use of artificial intelligence will probably look a lot like how we use machine learning today. We'll see 'AI-empowered humans' acting as Human Overlords to their robot servants. These AIs are smart enough to come up with the 'best options' for addressing human problems, but haven't been given the capability to execute them. Think about Google Maps: an extremely 'intelligent' program comes up with the quickest route for you to take from point A to point B. But it doesn't force you to take it; you get to decide which of the options offered best suits your needs. This is likely what working alongside the first AI will look like.

Rise of the Driverless Car

The problem is that we are almost certainly going to see the power of AI increase exponentially, and any human greenlighting will become an increasingly inefficient part of the system. In much the same way that we'll let the Google Maps AI start making decisions for us when we let it drive our driverless cars, we'll likely turn more and more of our decisions over to AI to take responsibility for. Super-smart AI will also likely be able to comprehend things that humans simply can't. The mass of data it has analysed will be beyond any one human's ability to judge effectively. Even today, financial algorithms are making instantaneous choices about the stock market, with humans just clicking 'yes' because the computer knows best. We've already seen electronic trading glitches leading to economic crises - and that was six years ago! Just how much responsibility might we start turning over to smart machines?

The Need to Solve Ethics

If we've given an AI the power to make decisions for us, we'll want to ensure it has our best interests at heart, right? It's vital to program some sort of ethical system into our AI - the problem is, humans aren't very good at deciding what is and isn't ethical! Think about a simple and seemingly universal rule like 'Don't kill people'. Now think about all the ways we disagree about when it's okay to break that rule: in self-defence, in executing dangerous criminals, to end suffering, in combat. Imagine trying to code all of that into an AI, for every moral variation. Arguably, it's beyond human capacity. And as for right and wrong - well, we've had thousands of years of debate about that and we still can't agree exactly what is and isn't ethical. So how can we hope to program a morality system we'd be happy to hand to an increasingly powerful AI?

Avoiding SkyNet

It may seem a little ridiculous to start worrying about the existential threat of AI when your machine learning algorithms keep bugging out on you constantly. And certainly, the possibilities offered by AI are amazing - more intelligence means faster, cheaper, and more effective solutions to humanity's problems. So despite the risk of being outpaced by alien machine minds that have no concept of our human value system, we must always balance that risk against the amazing potential rewards. Perhaps what's most important is simply not to be blasé about what super-intelligence means for AI. And frankly, I can't remember how I lived before Google Maps.
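The Google Maps example above is, at its core, shortest-path search over a weighted road graph: the program finds the best routes, and the human picks one. A minimal sketch of that idea (a toy network and plain Dijkstra's algorithm, nothing like Google's actual system):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total_cost, path) on a weighted graph.
    `graph` maps node -> list of (neighbor, cost) pairs."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + step, neighbor, path + [neighbor]))
    return float("inf"), []

# A hypothetical road network: travel times in minutes between junctions.
roads = {
    "A": [("B", 5), ("C", 10)],
    "B": [("C", 3), ("D", 11)],
    "C": [("D", 2)],
}
```

Here `shortest_route(roads, "A", "D")` returns the cheapest route; a navigation system would compute several such candidates and leave the final choice to the human.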


This Year in Machine Learning

Owen Roberts
22 Jan 2016
5 min read
The world of data has really boomed in the last few years. When I first joined Packt, Hadoop was The Next Big Thing on the horizon, and what people are now doing with all the data available to us was unthinkable. Even in the first few weeks of 2016 we're already seeing machine learning used in ways we probably wouldn't have thought of even a few years ago - everything from discovering a supernova 570 billion times brighter than the sun to attempting to predict this year's Super Bowl winner from past results. So what else can we expect in the next year for machine learning, and how will it affect us? Based on what we've seen over the last three years, here are a few predictions about what we can expect to happen in 2016 (with maybe a little wishful thinking mixed in too!).

Machine Learning becomes the new Cloud

Not too long ago every business started noticing the cloud, and with it came a shift in how companies were structured. Infrastructure was radically adapted to take full advantage of the benefits the cloud offers, and the trend doesn't look to be slowing down, with Microsoft recently promising to spend over $1 billion providing free cloud resources for non-profits. Starting this year, it's plausible that we'll see a new drive to bake machine learning into that infrastructure too. Why? Because every company will want to jump on the machine learning bandwagon! The benefits to every company are pretty enticing - ML offers everything from grandiose artificial intelligence to the much more mundane, such as improvements to recommendation engines and targeted ads - so don't be surprised if this year everyone attempts to work out what ML can do for them and starts investing in it.

The growth of MLaaS

Last year we saw Machine Learning as a Service appear on the market in bigger numbers. Amazon, Google, IBM, and Microsoft all have their own algorithms available to customers. It's a pretty logical move, and one that's not at all surprising. Why? Well, for one thing, data scientists are still as rare as unicorns. Sure, universities are creating new courses and training has become more common, but the fact remains that we won't see the benefits of these initiatives for a few years. Second, setting everything up for your own business is expensive. Lots of smaller companies simply don't have the money to invest in their own machine learning systems right now, or the time needed to fine-tune them. This is where sellers will be putting their investment this year: the smaller companies who can't afford a full ML experience without outside help.

Smarter Security with better protection

The next logical step in security is tech that can sense when there are holes in its own defenses and adapt to them before trouble strikes. ML has been used in one form or another for several years in fraud prevention, but in the IT sector we've been relying on static rules to detect attack patterns. Imagine if systems could detect irregular behavior accurately, or set risk scores dynamically, to ensure users had the best protection they could at any time. We're a long way from this being fool-proof, unfortunately, but as the year progresses we can expect to see the foundations start to be laid. After all, we're already starting to talk about it.

Machine Learning and Internet of Things combine

We're already nearly there, but with the rise in interest in the IoT we can expect these two powerhouses to finally combine. The perfect dream for IoT hobbyists has always been something out of the Jetsons or Wallace and Gromit - when you pass the sensor by your door frame in the morning, your kettle springs to life so you can have that morning coffee without waiting like the rest of us primals. But in truth the Internet of Things has the potential to be so much more than a convenience for hobbyists. By 2020 it is expected that over 25 billion 'Things' will be connected to the internet, each one collating reams and reams of data. For a business with the capacity to process this data, the insight it could collect is a huge boon for everything from new products to marketing strategy. For IoT to really live up to our dreams for it, we need a system that can recognize and collate relevant data, which is where an ML system is sure to take center stage.

Big things are happening in the world of machine learning, and I wouldn't be surprised if something incredibly left-field happens in the data world that takes us all by surprise. But what do you think is next for ML? If you're looking to start getting into the art of machine learning, or to boost your skills to the next level, be sure to give our Machine Learning tech page a look; it's filled with our latest and greatest ML books and videos, along with the titles we're releasing soon, available to preorder in your format of choice.
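The dynamic risk scoring mentioned in the security prediction can be illustrated with a deliberately tiny sketch: flag behaviour as risky when it deviates sharply from a learned baseline. This is a toy z-score heuristic for intuition only, not a real security product:

```python
def risk_score(history, current):
    """Score how unusual `current` is relative to `history`
    (e.g. requests per minute), as a number of standard deviations."""
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5
    if std == 0:
        # A perfectly flat baseline: any deviation at all is maximally unusual.
        return 0.0 if current == mean else float("inf")
    return abs(current - mean) / std

# Hypothetical baseline: a user's typical requests per minute.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
```

A normal reading like `risk_score(baseline, 5)` scores near zero, while a burst like `risk_score(baseline, 40)` scores far above any sensible threshold; a real system would combine many such signals and update the baseline continuously.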


Picking up TensorFlow can now pay dividends sooner

Sam Abrahams
23 May 2016
9 min read
It's been nearly four months since TensorFlow, Google's computation graph machine learning library, was open sourced, and the momentum from its launch is still going strong. In that time, both Microsoft and Baidu have released their own deep-learning libraries (CNTK and warp-ctc, respectively), and the machine learning arms race has escalated even further with Yahoo open sourcing CaffeOnSpark. Google hasn't been idle, however, and with the recent releases of TensorFlow Serving and the long-awaited distributed runtime, now is the time for businesses and individual data scientists to ask: is it time to commit to TensorFlow?

TensorFlow's most appealing features

There are a lot of machine learning libraries available today - what makes TensorFlow stand out in this crowded space?

1. Flexibility without headaches

TensorFlow heavily borrows concepts from the more tenured machine learning library Theano. Many models written for research papers were built in Theano, and its composable, node-by-node writing style translates well when implementing a model whose graph was drawn by hand first. TensorFlow's API is extremely similar. Both Theano and TensorFlow feature a Python API for defining the computation graph, which then hooks into high-performance C/C++ implementations of mathematical operations. Both are able to automatically differentiate their graphs with respect to their inputs, which facilitates learning on complicated neural network structures, and both integrate tightly with NumPy for defining tensors (n-dimensional arrays). However, one of the biggest advantages TensorFlow currently has over Theano (at least when comparing features both libraries share) is its compile time. As of this writing, Theano's compile times can be quite lengthy, and although there are options to speed up compilation for experimentation, they come at the cost of a slower output model.
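The build-a-graph-then-differentiate-it workflow both libraries share can be sketched in miniature in plain Python. This is a toy for intuition only - it is not Theano's or TensorFlow's actual API - showing scalar nodes, operator overloading to build the graph, and reverse-mode automatic differentiation:

```python
class Node:
    """A scalar node in a tiny computation graph with reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate gradients by walking the graph from output to inputs."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Build the graph z = x*y + x, then differentiate it with respect to its inputs.
x, y = Node(3.0), Node(4.0)
z = x * y + x
z.backward()   # x.grad = y + 1 = 5, y.grad = x = 3
```

Real libraries do the same bookkeeping over tensors rather than scalars, compile the graph to fast native kernels, and topologically sort the traversal, but the core idea is this small.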
TensorFlow's compilation is much faster, which leads to fewer headaches when trying out slightly different versions of models.

2. It's backed by Google (and the OSS community)

At first, this may sound more like brand recognition than a tangible advantage, but when I say it's 'backed' by Google, what I mean is that Google is seriously pouring tons of resources into making TensorFlow an awesome tool. There is an entire team at Google dedicated to maintaining and improving the software steadily and visibly, while simultaneously running a clinic on how to properly interact with and engage the open source community. Google has proved itself willing to adopt quality submissions from the community, as well as flexible enough to adapt to public demands (such as moving the master contribution repository from Google's self-hosted Gerrit server to GitHub). These actions, combined with genuinely constructive feedback from Google's team on pull requests and issues, have helped make the community feel like this is a project worth supporting. The result? A continuous stream of little improvements and ideas from the community while the core Google team works on releasing larger features. Not only does TensorFlow receive the benefits of a larger contributor base because of this, it is also more likely to withstand user decay, as more people have invested time in making TensorFlow their own.

3. Easy visualizations and debugging with TensorBoard

TensorBoard was the shiny toy that shipped with the first open source release of TensorFlow, but it's much more than eye candy. Not only can you use it as a guide to ensure that what you've coded matches your reference model, but you can also keep track of data flowing through your model. This is especially useful when debugging subsections of your graph, as you can go in and see where any hiccups may have occurred.

4. TensorFlow Serving cuts the development-deployment cycle by nearly half

The typical life cycle of machine learning models in the business world is generally as follows:

Research and develop a model that is more accurate/faster/more descriptive than the previous model
Write down the exact specifications of the finalized model
Recreate the model in C++/C/Java/some other fast, compiled language
Push the new model into deployment, replacing the old model
Repeat

On release, TensorFlow promised to "connect research and production." However, the community had to wait until just recently for that promise to come to fruition with TensorFlow Serving. This software runs as a server that can natively serve models built in TensorFlow, which makes the new life cycle look like this:

Research and develop a new model
Hook the new model into TensorFlow Serving
Repeat

While there is overhead in learning how to use TensorFlow Serving, the process of hooking up new models stays the same, whereas rewriting new models in a different language is time-consuming and difficult.

5. Distributed learning out of the box

The distributed runtime is one of the newest features to be pushed to the TensorFlow repository, but it has been, by far, the most eagerly anticipated aspect of TensorFlow. Without having to incorporate any other libraries or software packages, TensorFlow is able to run distributed learning tasks on heterogeneous hardware with various CPUs and GPUs. This feature is absolutely brand new (it came out in the middle of writing this post!), so do your research on how to use it and how well it runs.

Areas to look for improvement

TensorFlow can't claim to be the best at everything, and there are several sticking points that should be addressed sooner rather than later. Luckily, Google has been making steady improvements to TensorFlow since it was released, and I would be surprised if most of these were not remedied within the next few months.

Runtime speed

Although the TensorFlow team promises deployment-worthy models from compiled TensorFlow code, at this time its single-machine training speed lags behind most other options. The team has made improvements in speed since release, but there is still more work to be done. In-place operations, a more efficient node placement algorithm, and better compression techniques could help here. Distributed benchmarks are not available at this time - expect to see them after the next official TensorFlow release.

Pre-trained models

Libraries such as Caffe, Torch, and Theano have a good selection of pre-trained, state-of-the-art models implemented in them. While Google did release a version of its Inception-v3 model in TensorFlow, it needs more options to provide a starting place for more types of problems.

Expanded distributed support

Yes, TensorFlow did push code for its distributed runtime, but it still needs better documentation as well as more examples. I'm incredibly excited that it's available to try out right now, but it's going to take some time for most people to put it into production.

Interested in getting up and running with TensorFlow? You'll need a primer on Python. Luckily, our Python Fundamentals course in Mapt gives you an accessible yet comprehensive journey through Python - and this week it's completely free. Click here, login, then get stuck in...

The future

Most people want to use software that is going to last for more than a few months - what does the future look like for TensorFlow? Here are my predictions about the medium-term future of the library.

Enterprise-level distributions

Just as Hadoop has commercial distributions of its software, I expect to see more and more companies offering supported suites that tie into TensorFlow. Whether they offer more pre-trained models built on top of Keras (which already supports a TensorFlow backend), or make TensorFlow work seamlessly with a distributed file system like Hadoop, I foresee a lot of demand for enterprise features and support around TensorFlow.

TensorFlow's speed will catch up (and most users won't need it)

As mentioned earlier, TensorFlow still lags behind many other libraries out there. However, with the improvements already made, it's clear that Google is determined to make TensorFlow as efficient as possible. That said, I believe most applications of TensorFlow won't desperately need the speed increase. Of course, it's nice to have your models run faster, but most businesses out there don't have petabytes of useful data to work with, which means that model training usually doesn't take the "weeks" we often see claimed as training time.

TensorFlow is going to get easier, not more difficult, over time

While there are definitely going to be many new features in upcoming releases of TensorFlow, I expect to see the learning curve of the software go down as more resources, such as tutorials, examples, and books, are made available. The documentation's terminology has already changed in places to be more understandable, and navigation within the documentation should improve over time. Finally, while most of the latest features in TensorFlow don't have the friendliest APIs right now, I'd be shocked if more user-friendly versions of TensorFlow Serving and the distributed runtime weren't already in the works.

Should I use TensorFlow?

TensorFlow appears primed to fulfil the promise made back in November: a distributed, flexible data flow graph library that excels at neural network composition. I leave it to you decision makers to figure out whether TensorFlow is the right move for your own machine learning tasks, but here is my overall impression: no other machine learning framework targeted at production-level tasks is as flexible, powerful, or improving as rapidly as TensorFlow. While other frameworks may carry advantages over TensorFlow now, Google is putting in the effort to make consistent improvements, which bodes well for a community that is still in its infancy.

About the author

Sam Abrahams is a freelance data engineer and animator in Los Angeles, CA. He specializes in real-world applications of machine learning and is a contributor to TensorFlow. Sam runs a small tech blog, Memdump, and is an active member of the local hacker scene in West LA.


FAT* 2018 Conference Session 1 Summary: Online Discrimination and Privacy

Aarthi Kumaraswamy
26 Feb 2018
5 min read
The FAT* 2018 Conference on Fairness, Accountability, and Transparency is a first-of-its-kind international and interdisciplinary peer-reviewed conference that seeks to publish and present work examining the fairness, accountability, and transparency of algorithmic systems. This article covers the research papers presented in Session 1, on online discrimination and privacy. FAT* hosted research from a wide variety of disciplines, including computer science, statistics, the social sciences, and law. It took place on February 23 and 24, 2018, at the New York University Law School, in cooperation with its Technology Law and Policy Clinic. The conference brought together over 450 attendees, including academic researchers, policymakers, and machine learning practitioners, and featured 17 research papers, 6 tutorials, and 2 keynote presentations from leading experts in the field.

Session 1 explored ways in which online discrimination can happen and privacy can be compromised. The papers presented look for novel and practical solutions to some of the problems identified. Below we introduce the papers presented at FAT* 2018 in this area, summarising the key challenges and questions explored by leading minds on the topic and their proposed answers.

Session Chair: Joshua Kroll (University of California, Berkeley)

Paper 1: Potential for Discrimination in Online Targeted Advertising

Problems identified in the paper: Much recent work has focused on detecting instances of discrimination in online services, ranging from discriminatory pricing on e-commerce and travel sites like Staples (Mikians et al., 2012) and Hotels.com (Hannák et al., 2014) to discriminatory prioritization of service requests and offerings from certain users over others in crowdsourcing and social networking sites like TaskRabbit (Hannák et al., 2017). This paper focuses on the potential for discrimination in online advertising, which underpins much of the Internet's economy. Specifically, it focuses on targeted advertising, where ads are shown only to a subset of users that have attributes (features) selected by the advertiser.

Key takeaways:
A malicious advertiser can create highly discriminatory ads without using sensitive attributes such as gender or race, and the current methods used to counter the problem are insufficient.
The potential for discrimination in targeted advertising arises from an advertiser's ability to use the extensive personal (demographic, behavioral, and interest) data that ad platforms gather about their users to target ads.
Facebook offers several targeting methods: attribute-based targeting, PII-based (custom audience) targeting, and look-alike audience targeting.
There are three basic approaches to quantifying discrimination, each with tradeoffs: based on the advertiser's intent, based on the ad targeting process, and based on the targeted audience (outcomes).

Paper 2: Discrimination in Online Personalization: A Multidisciplinary Inquiry

The authors explore ways in which discrimination may arise in the targeting of job-related advertising, noting the potential for multiple parties to contribute to its occurrence. They then examine the statutes and case law interpreting the prohibition on advertisements that indicate a preference based on protected class, and consider its application to online advertising. The paper provides a legal analysis of a real case, which found that simulated users selecting a gender in Google's Ad Settings received employment-related advertisements at differing rates along gender lines despite identical web browsing patterns.

Key takeaways:
The authors' analysis of existing case law concludes that Section 230 may not immunize advertising platforms from liability under the FHA for algorithmic targeting of advertisements that indicate a preference for or against a protected class.
Possible causes of the ad targeting: the advertiser selecting gender segmentation; machine learning alone, with Google selecting gender; the advertiser selecting keywords; or the advertiser being outbid for women.
Given the limited scope of Title VII, the authors conclude that Google is unlikely to face liability on the facts presented by Datta et al. Thus the advertising prohibition of Title VII, like the prohibitions on discriminatory employment practices, is ill-equipped to advance the aims of equal treatment in a world where algorithms play an increasing role in decision making.

Paper 3: Privacy for All: Ensuring Fair and Equitable Privacy Protections

In this position paper, the authors argue for applying recent research on ensuring that sociotechnical systems are fair and non-discriminatory to the privacy protections those systems may provide. Just as algorithmic decision-making systems may have discriminatory outcomes even without explicit or deliberate discrimination, so too may privacy regimes disproportionately fail to protect vulnerable members of their target population, resulting in disparate impact with respect to the effectiveness of privacy protections.

Key takeaways:
Research questions posed: Are technical or non-technical privacy protection schemes fair? When and how do privacy protection technologies or policies improve or impede the fairness of the systems they affect? When and how do fairness-enhancing technologies or policies enhance or reduce the privacy protections of the people involved?
Data linking can lead to deanonymization, and live recommenders can also be attacked to leak information.
The authors propose a new definition of a fair privacy scheme: a privacy scheme is (group-)fair if the probability of failure and the expected risk are statistically independent of the subject's membership in a protected class.

If you have missed Session 2, Session 3, Session 4 and Session 5 of the FAT* 2018 Conference, we have got you covered.
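The (group-)fairness definition from Paper 3 can be made concrete with a toy audit: estimate the failure probability of a privacy protection per group and compare. The data and helper below are entirely hypothetical, a sketch of the idea rather than the authors' method:

```python
def failure_rates_by_group(records):
    """Estimate per-group failure probability of a privacy protection.
    `records` is a list of (group, failed) pairs from a hypothetical audit."""
    totals, failures = {}, {}
    for group, failed in records:
        totals[group] = totals.get(group, 0) + 1
        failures[group] = failures.get(group, 0) + (1 if failed else 0)
    return {g: failures[g] / totals[g] for g in totals}

# Hypothetical audit log: the protection fails far more often for group "B",
# so under the paper's definition this scheme would not be (group-)fair.
log = [("A", False)] * 90 + [("A", True)] * 10 + \
      [("B", False)] * 60 + [("B", True)] * 40
```

A real audit would test whether the observed rate difference is statistically significant (e.g. with a chi-squared test) rather than eyeballing the estimates.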


Packt Explains... Deep Learning

Packt Publishing
29 Feb 2016
1 min read
If you've been looking into the world of Machine Learning lately, you might have heard about a mysterious thing called "Deep Learning". But just what is Deep Learning, and what does it mean for the world of Machine Learning as a whole? Take less than two minutes out of your day to find out with this video, and fully realize the awesome potential Deep Learning has. Plus, if you're already in love with Deep Learning, or want to finally start your Deep Learning journey, be sure to pick up one of our recommendations below and get started right now.