
How-To Tutorials


9 reasons why Rust programmers love Rust

Richa Tripathi
03 Oct 2018
8 min read
The 2018 RedMonk Programming Language Rankings marked the entry of a new programming language into their Top 25 list. It has been an incredibly successful year for the Rust programming language in terms of popularity: it also jumped from the 46th most popular language on GitHub to the 18th position. The Stack Overflow survey of 2018 is another indicator of the rise of Rust. Almost 78% of the developers who are working with Rust loved working with it, and it topped the list of the most loved programming languages for the third year in a row. Not only that, it ranked 8th among the most wanted programming languages in the survey, meaning that respondents who had not used it yet would like to learn it.

Although Rust was designed as a low-level language, best suited for systems, embedded, and other performance-critical code, it is gaining a lot of traction and presents a great opportunity for web developers and game developers. Rust is also empowering novice developers with the tools to start shipping code fast. So, why is Rust so tempting? Let's explore the high points of this incredible language and understand the variety of features that make it interesting to learn.

Automatic Garbage Collection

Garbage collection and non-memory resources often create problems in some systems languages. Rust pays no heed to garbage collection and removes the possibility of failures caused by it: in Rust, memory and resource cleanup is completely taken care of by RAII (Resource Acquisition Is Initialization).

Better support for Concurrency

Concurrency and parallelism are incredibly important topics in computer science and are also a hot topic in the industry today. Computers are gaining more and more cores, yet many programmers aren't prepared to fully utilize them. Handling concurrent programming safely and efficiently is another major goal of the Rust language. Concurrency is difficult to reason about, and Rust's strong, static type system helps you reason about your code. Rust also gives you two traits, Send and Sync, to help you make sense of code that can possibly be concurrent. Rust's standard library also provides a threads API, which enables you to run Rust code in parallel, and you can use Rust's threads as a simple isolation mechanism.

Error Handling in Rust is beautiful

A programmer is bound to make errors, irrespective of the programming language they use. Making errors while programming is normal, but it is the error-handling mechanism of the language that shapes the experience of writing code. In Rust, errors are divided into two types: unrecoverable errors and recoverable errors.

Unrecoverable errors

An error is classified as unrecoverable when there is no option other than to abort the program. The panic! macro in Rust is very helpful in these cases, especially when a bug has been detected in the code but the programmer is not clear how to handle it. The panic! macro generates a failure message that helps the user debug the problem, and it stops execution before more catastrophic events occur.

Recoverable errors

Errors which can be handled easily, or which do not have a serious impact on the execution of the program, are known as recoverable errors. They are represented by Result<T, E>, an enum that consists of two variants, Ok(T) and Err(E), and describes the possible error in the program. Ok(T): 'T' is the type of value returned in the Ok variant in the success case; it is the expected outcome. Err(E): 'E' is the type of error returned in the Err variant in the failure case; it is an unexpected outcome.
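To make the two error families concrete, here is a small, self-contained sketch (not taken from the survey or from any Packt title; the parse_port function is purely illustrative) showing a recoverable Result being matched and an unrecoverable panic!:

    use std::num::ParseIntError;

    // A recoverable error: the caller decides what to do with the Err variant.
    fn parse_port(input: &str) -> Result<u16, ParseIntError> {
        input.trim().parse::<u16>()
    }

    fn main() {
        // Success case: the Ok(T) variant carries the parsed value.
        match parse_port("8080") {
            Ok(port) => println!("listening on port {}", port),
            Err(e) => eprintln!("could not parse port: {}", e),
        }

        // Failure case handled gracefully instead of crashing.
        if let Err(e) = parse_port("not-a-number") {
            eprintln!("recoverable error: {}", e);
        }

        // An unrecoverable error: panic! aborts with a failure message.
        let config_loaded = false;
        if !config_loaded {
            panic!("no configuration found; cannot continue");
        }
    }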
Resource Management

The one attribute that makes Rust stand out (and, for that matter, overpowers Google's Go) is its approach to resource management. Rust follows the C++ lead, with concepts like borrowing and mutable borrowing, and thus resource management becomes an elegant process. Furthermore, Rust didn't need a second chance to learn that resource management is not just about memory usage; getting it right the first time makes it a standout performer on this point. Although the Rust documentation does a good job of explaining the technical details, the article by M. Tim Jones explains the concepts in a much friendlier and easier-to-understand way, so it is worth listing his points here as well. The following points are summarized from his article.

Reusable code via modules

Rust allows you to organize code in a way that promotes its reuse. You attain this reusability by using modules, which are nothing but code organized as packages that other programmers can use. These modules contain functions, structures, and even other modules that you can either make public, so they can be accessed by the users of the module, or keep private, so they can be used only within the module. There are three keywords to create modules, use modules, and modify the visibility of elements in modules:

The mod keyword creates a new module
The use keyword allows you to use the module (it exposes the definitions into the scope so you can use them)
The pub keyword makes elements of the module public (otherwise, they're private)

Cleaner code with better safety checks

In Rust, the compiler enforces memory safety and other checks that make the language safe. You will never have to worry about dangling pointers or about using an object after it has been freed. These guarantees are part of the core Rust language and allow you to write clean code. Rust also includes an unsafe keyword with which you can disable checks that would typically result in a compilation error.

Data types and Collections in Rust

Rust is a statically typed programming language, which means that every value in Rust must have a specified data type. The biggest advantage of static typing is that a large class of errors is identified earlier in the development process. These data types can be broadly classified into two kinds: scalar and compound. Scalar data types represent a single value, such as an integer, floating-point number, or character, and are commonly present in other programming languages as well. Rust also provides compound data types, such as tuples and arrays, which allow programmers to group multiple values into one type.

The Rust standard library additionally provides a number of data structures called collections. Collections contain multiple values, but they differ from the standard compound types like tuples and arrays discussed above: the amount of data does not need to be specified at compile time, which allows the structure to grow and shrink as the program runs. Vectors, strings, and hash maps are the three most commonly used collections in Rust.
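As a quick illustration of those three collections (a minimal sketch, not drawn from the article above), the following program grows a Vec at runtime, builds a String, and counts words with a HashMap:

    use std::collections::HashMap;

    fn main() {
        // A growable vector: the size does not need to be known at compile time.
        let mut scores: Vec<i32> = Vec::new();
        scores.push(10);
        scores.push(25);
        println!("scores: {:?}", scores);

        // A heap-allocated, growable string.
        let mut greeting = String::from("Hello");
        greeting.push_str(", Rust!");
        println!("{}", greeting);

        // A hash map counting how often each word appears.
        let text = "to be or not to be";
        let mut counts: HashMap<&str, u32> = HashMap::new();
        for word in text.split_whitespace() {
            *counts.entry(word).or_insert(0) += 1;
        }
        println!("word counts: {:?}", counts);
    }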
The friendly Rust community

Rust owes its success to the breadth and depth of engagement of its vibrant community, which supports a highly collaborative process for helping the language evolve in a truly open-source way. Rust is built from the bottom up, rather than any one individual or organization controlling the fate of the technology.

Reliable, robust release cycles of Rust

What do Java, Spring, and Angular have in common? They never release their updates when they promise to. The release cycle of the Rust community, by contrast, works with clockwork precision and is very reliable. Here's an overview of the dates and versions: in mid-September 2018, the Rust team released Rust 2018 RC1. Rust 2018 is the first major new edition of Rust (after Rust 1.0, released in 2015). The release marks the culmination of the last three years of Rust's development from the core team and brings the language together in one neat package. It includes plenty of new features such as raw identifiers, better path clarity, new optimizations, and other additions.

You can learn more about the Rust language and its evolution on the Rust blog and download it from the Rust language website.

Note: the headline was edited 09.06.2018 to make it clear that Rust was found to be the most loved language among developers using it.

Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes
Rust as a Game Programming Language: Is it any good?
Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more


“Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers

Savia Lobo
03 Oct 2018
4 min read
Yesterday, a group of European information security researchers announced that they have discovered a vulnerability related to Intel's Management Engine (Intel ME). They say that the root of the problem is an undocumented Intel ME mode known as Manufacturing Mode: undocumented commands enable overwriting SPI flash memory and implementing a doomsday scenario, and the flaw can be used to locally exploit an ME vulnerability (INTEL-SA-00086).

What is Manufacturing Mode?

Intel ME Manufacturing Mode is intended for the configuration and testing of the end platform during manufacturing. However, this mode and its potential risks are not described anywhere in Intel's public documentation. Ordinary users do not have the ability to disable this mode, since the relevant utility (part of Intel ME System Tools) is not officially available. As a result, there is no software that can protect, or even notify, the user if this mode is enabled.

This mode allows configuring critical platform settings stored in one-time-programmable memory (FUSEs). These settings include those for BootGuard (the mode, policy, and hash for the digital signing key for the ACM and UEFI modules). Some of them are referred to as FPFs (Field Programmable Fuses).

An output of the -FPFs option in FPT

In addition to FPFs, in Manufacturing Mode the hardware manufacturer can specify settings for Intel ME, which are stored in the Intel ME internal file system (MFS) on SPI flash memory. These parameters can be changed by reprogramming the SPI flash. They are known as CVARs (Configurable NVARs, Named Variables). CVARs, just like FPFs, can be set and read via FPT.

Manufacturing Mode vulnerability in Intel chips within Apple laptops

The researchers analyzed several platforms from a number of manufacturers, including Lenovo and Apple MacBook Pro laptops. The Lenovo models did not have any issues related to Manufacturing Mode. However, the researchers found that the Intel chipsets within the Apple laptops were running in Manufacturing Mode and included the vulnerability CVE-2018-4251. This information was reported to Apple, and the vulnerability was patched in June, in the macOS High Sierra update 10.13.5.

By exploiting CVE-2018-4251, an attacker could write old versions of Intel ME (such as versions containing vulnerability INTEL-SA-00086) to memory without needing an SPI programmer and without physical access to the computer. Thus, a local vector is possible for exploitation of INTEL-SA-00086, which enables running arbitrary code in ME. The researchers also point out that, in the notes for the INTEL-SA-00086 security bulletin, Intel does not mention enabled Manufacturing Mode as a method for local exploitation in the absence of physical access. Instead, the company incorrectly claims that local exploitation is possible only if access settings for SPI regions have been misconfigured.

How can users save themselves from this vulnerability?

To keep users safe, the researchers decided to describe how to check the status of Manufacturing Mode and how to disable it. Intel System Tools includes MEInfo, which provides thorough diagnostic information about the current state of ME and the platform overall. The researchers demonstrated this utility in their previous research about the undocumented HAP (High Assurance Platform) mode, where they showed how to disable ME.
The utility, when called with the -FWSTS flag, displays a detailed description of the HECI status registers and the current status of Manufacturing Mode (when the fourth bit of the FWSTS status register is set, Manufacturing Mode is active).

Example of MEInfo output

The researchers also created a program for checking the status of Manufacturing Mode in case the user, for whatever reason, does not have access to Intel ME System Tools. Here is what the script shows on affected systems:

mmdetect script

To disable Manufacturing Mode, FPT has a special option (-CLOSEMNF) that sets the recommended access rights for SPI flash regions in the descriptor. Here is what happens when -CLOSEMNF is entered:

Process of closing Manufacturing Mode with FPT

Thus, the researchers demonstrated that Intel ME has a Manufacturing Mode problem, and that even major commercial manufacturers such as Apple are not immune to configuration mistakes on Intel platforms. Worse, there is no public information on the topic, leaving end users in the dark about weaknesses that could result in data theft, persistent irremovable rootkits, and even ‘bricking’ of hardware. To learn about this vulnerability in detail, visit Positive Research’s blog.

Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips
SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets
Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison


How to deploy Serverless Applications in Go using AWS Lambda [Tutorial]

Savia Lobo
03 Oct 2018
12 min read
Building a serverless application allows you to focus on your application code instead of managing and operating infrastructure. If you choose AWS for this purpose, you do not have to think about provisioning or configuring servers, since AWS handles all of this for you. This reduces your infrastructure management burden and helps you get a faster time-to-market.

This tutorial is an excerpt taken from the book Hands-On Serverless Applications with Go, written by Mohamed Labouardy. In the book, you will learn how to design and build a production-ready application in Go using AWS serverless services with zero upfront infrastructure investment. This article will cover the following points:

Build, deploy, and manage our Lambda functions using some advanced AWS CLI commands
Publish multiple versions of the API
Learn how to separate multiple deployment environments (sandbox, staging, and production) with aliases
Cover the usage of API Gateway stage variables to change a method endpoint's behavior

Lambda CLI commands

In this section, we will go through the various AWS Lambda commands that you might use while building your Lambda functions. We will also learn how you can use them to automate your deployment process.

The list-functions command

As its name implies, this command lists all Lambda functions in the AWS region you provide. The following command will return all Lambda functions in the North Virginia region:

    aws lambda list-functions --region us-east-1

For each function, the response includes the function's configuration information (FunctionName, resource usage, environment variables, IAM role, runtime environment, and so on). To list only some attributes, such as the function name, you can use the query filter option, as follows:

    aws lambda list-functions --query Functions[].FunctionName[]

The create-function command

You should be familiar with this command, as it has been used multiple times to create a new Lambda function from scratch. In addition to the function's configuration, you can use the command to provide the deployment package (ZIP) in two ways:

ZIP file: provide the path to the ZIP file of the code you are uploading with the --zip-file option:

    aws lambda create-function --function-name UpdateMovie \
      --description "Update an existing movie" \
      --runtime go1.x \
      --role arn:aws:iam::ACCOUNT_ID:role/UpdateMovieRole \
      --handler main \
      --environment Variables={TABLE_NAME=movies} \
      --zip-file fileb://./deployment.zip \
      --region us-east-1

S3 bucket object: provide the S3 bucket and object name with the --code option:

    aws lambda create-function --function-name UpdateMovie \
      --description "Update an existing movie" \
      --runtime go1.x \
      --role arn:aws:iam::ACCOUNT_ID:role/UpdateMovieRole \
      --handler main \
      --environment Variables={TABLE_NAME=movies} \
      --code S3Bucket=movies-api-deployment-package,S3Key=deployment.zip \
      --region us-east-1

The preceding commands return a summary of the function's settings in JSON format. It's worth mentioning that while creating your Lambda function, you might override the compute usage and network settings based on your function's behavior with the following options:

--timeout: The default execution timeout is three seconds. When the three seconds are reached, AWS Lambda terminates your function. The maximum timeout you can set is five minutes.
--memory-size: The amount of memory given to your function when executed. The default value is 128 MB and the maximum is 3,008 MB (in increments of 64 MB).
--vpc-config: This deploys the Lambda function in a private VPC. While it might be useful if the function requires communication with internal resources, it should ideally be avoided as it impacts Lambda performance and scaling.

AWS doesn't allow you to set the CPU usage of your function, as it's calculated automatically based on the memory allocated for your function: CPU power is proportional to the memory.
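For context, the deployment.zip referenced above contains a compiled Go binary whose name matches the --handler flag (main). The handler itself is not shown at this point in the excerpt; the following is a minimal, hypothetical sketch of what such a function can look like with the aws-lambda-go SDK, not the book's UpdateMovie implementation:

    package main

    import (
        "net/http"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handler receives an API Gateway proxy event and returns a proxy response.
    func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        return events.APIGatewayProxyResponse{
            StatusCode: http.StatusOK,
            Body:       "Hello from Lambda",
        }, nil
    }

    func main() {
        // lambda.Start wires the handler into the Lambda runtime.
        lambda.Start(handler)
    }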
The update-function-code command

In addition to the AWS Management Console, you can update your Lambda function's code with the AWS CLI. The command requires the target Lambda function name and the new deployment package. Similarly to the previous command, you can provide the package in two ways:

The path to the new .zip file:

    aws lambda update-function-code --function-name UpdateMovie \
      --zip-file fileb://./deployment-1.0.0.zip \
      --region us-east-1

The S3 bucket where the .zip file is stored:

    aws lambda update-function-code --function-name UpdateMovie \
      --s3-bucket movies-api-deployment-packages \
      --s3-key deployment-1.0.0.zip \
      --region us-east-1

This operation prints a new unique ID (called RevisionId) for each change in the Lambda function's code.

The get-function-configuration command

In order to retrieve the configuration information of a Lambda function, issue the following command:

    aws lambda get-function-configuration --function-name UpdateMovie --region us-east-1

The preceding command provides the same information in the output that was displayed when the create-function command was used. To retrieve the configuration information for a specific Lambda version or alias (see the following section), you can use the --qualifier option.

The invoke command

So far, we have invoked our Lambda functions directly from the AWS Lambda Console and through HTTP events with API Gateway. In addition to that, Lambda can be invoked from the AWS CLI with the invoke command:

    aws lambda invoke --function-name UpdateMovie result.json

The preceding command invokes the UpdateMovie function and saves the function's output in the result.json file. The status code is 400, which is normal, as UpdateMovie is expecting a JSON input. Let's see how to provide JSON to our function with the invoke command.

Head back to the DynamoDB movies table and pick a movie that you want to update. In this example, we will update the movie with the ID of 13. Create a JSON file with a body attribute that contains the new movie item attributes, as the Lambda function expects the input to be in the API Gateway proxy request format:

    {
      "body": "{\"id\":\"13\", \"name\":\"Deadpool 2\"}"
    }

Finally, run the invoke command again with the JSON file as the input parameter:

    aws lambda invoke --function-name UpdateMovie --payload file://input.json result.json

If you print the result.json content, the updated movie should be returned. You can verify that the movie's name is updated in the DynamoDB table by invoking the FindAllMovies function:

    aws lambda invoke --function-name FindAllMovies result.json

The body attribute should contain the newly updated movie, and in the DynamoDB Console the movie with the ID of 13 should now have its new name.
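The reason the payload wraps the movie JSON in a body attribute is that the handler is given an API Gateway proxy event and must decode the inner JSON itself. The book's UpdateMovie code is not reproduced at this point in the excerpt, so the following is only a rough, hypothetical sketch of that decoding step (the Movie struct and its field names are assumptions):

    package main

    import (
        "encoding/json"
        "net/http"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // Movie is a hypothetical item shape; the real table schema may differ.
    type Movie struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    func update(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        var movie Movie
        // The movie JSON arrives as a string inside the proxy event's Body field.
        if err := json.Unmarshal([]byte(request.Body), &movie); err != nil {
            return events.APIGatewayProxyResponse{
                StatusCode: http.StatusBadRequest,
                Body:       "Invalid payload",
            }, nil
        }

        // ... update the item in DynamoDB here ...

        response, _ := json.Marshal(movie)
        return events.APIGatewayProxyResponse{
            StatusCode: http.StatusOK,
            Body:       string(response),
        }, nil
    }

    func main() {
        lambda.Start(update)
    }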
The delete-function command

To delete a Lambda function, you can use the following command:

    aws lambda delete-function --function-name UpdateMovie

By default, the command deletes all function versions and aliases. To delete a specific version or alias, you might want to use the --qualifier option.

By now, you should be familiar with all the AWS CLI commands you might need while building your serverless applications in AWS Lambda. In the upcoming section, we will see how to create different versions of your Lambda functions and maintain multiple environments with aliases.

Versions and aliases

When you're building your serverless application, you must separate your deployment environments to test new changes without impacting your production. Therefore, having multiple versions of your Lambda functions makes sense.

Versioning

A version represents a state of your function's code and configuration in time. By default, each Lambda function has the $LATEST version pointing to the latest changes of your function. In order to create a new version from the $LATEST version, click on Actions and Publish new version. Let's call it 1.0.0. The new version will be created with an ID of 1 (incremental); note that the Lambda function ARN at the top of the window now carries the version ID.

Once the version is created, you cannot update the function code. Moreover, advanced settings, such as IAM roles, network configuration, and compute usage, cannot be changed. Versions are called immutable, which means they cannot be changed once they're published; only the $LATEST version is editable.

Now we know how to publish a new version from the console, so let's publish a new version with the AWS CLI. But first, we need to update the FindAllMovies function, as we cannot publish a new version if no changes were made to $LATEST since publishing version 1.0.0.

The new version will have a pagination system: the function will return only the number of items requested by the user. The following code reads the Count header parameter, converts it to a number, and uses the Scan operation with the Limit parameter to fetch the movies from DynamoDB:

    func findAll(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
      size, err := strconv.Atoi(request.Headers["Count"])
      if err != nil {
        return events.APIGatewayProxyResponse{
          StatusCode: http.StatusBadRequest,
          Body:       "Count Header should be a number",
        }, nil
      }

      ...

      svc := dynamodb.New(cfg)
      req := svc.ScanRequest(&dynamodb.ScanInput{
        TableName: aws.String(os.Getenv("TABLE_NAME")),
        Limit:     aws.Int64(int64(size)),
      })

      ...
    }

Next, we update the FindAllMovies Lambda function's code with the update-function-code command:

    aws lambda update-function-code --function-name FindAllMovies \
      --zip-file fileb://./deployment.zip

Then, publish a new version, 1.1.0, based on the current configuration and code with the following command:

    aws lambda publish-version --function-name FindAllMovies --description 1.1.0

Go back to the AWS Lambda Console and navigate to FindAllMovies; a new version should have been created with a new ID of 2. Now that our versions are created, let's test them out by using the AWS CLI invoke command.
FindAllMovies v1.0.0

Invoke the FindAllMovies v1.0.0 version with its ID in the qualifier parameter with the following command:

    aws lambda invoke --function-name FindAllMovies --qualifier 1 result.json

result.json should contain all the movies in the DynamoDB movies table. To know more about the output of FindAllMovies v1.1.0, and more about semantic versioning, head over to the book.

Aliases

An alias is a pointer to a specific version; it allows you to promote a function from one environment to another (such as staging to production). Aliases are mutable, unlike versions, which are immutable. To illustrate the concept of aliases, we will create two of them: a Production alias pointing to the FindAllMovies Lambda function's 1.0.0 version, and a Staging alias that points to the function's 1.1.0 version. Then, we will configure API Gateway to use these aliases instead of the $LATEST version.

Head back to the FindAllMovies configuration page. If you click on the Qualifiers drop-down list, you should see a default alias called Unqualified pointing to your $LATEST version. To create a new alias, click on Actions and then create a new alias called Staging, selecting the newly published version as the target. Once created, the new alias should be added to the list of aliases.

Next, create a new alias for the Production environment that points to version 1.0.0 using the AWS command line:

    aws lambda create-alias --function-name FindAllMovies \
      --name Production --description "Production environment" \
      --function-version 1

Similarly, this new alias should be successfully created.
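Later on, promoting a release is just a matter of repointing an alias at a different version. This step is not shown in the excerpt; assuming the aliases above, a promotion and a rollback would look roughly like this:

    # Promote: point the Production alias at version 2 (the 1.1.0 release).
    aws lambda update-alias --function-name FindAllMovies \
      --name Production --function-version 2

    # Roll back: point the Production alias back at version 1 (the 1.0.0 release).
    aws lambda update-alias --function-name FindAllMovies \
      --name Production --function-version 1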
Now that our aliases have been created, let's configure API Gateway to use them with stage variables.

Stage variables

Stage variables are environment variables that can be used to change the runtime behavior of API Gateway methods for each deployment stage. The following steps illustrate how to use stage variables with API Gateway.

On the API Gateway Console, navigate to the Movies API, click on the GET method, and update the target Lambda function to use a stage variable instead of a hardcoded Lambda function name. When you save it, a new prompt will ask you to grant API Gateway permission to call your Lambda function aliases. Execute the following commands to allow API Gateway to invoke the Production and Staging aliases:

Production alias:

    aws lambda add-permission --function-name "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:FindAllMovies:Production" \
      --source-arn "arn:aws:execute-api:us-east-1:ACCOUNT_ID:API_ID/*/GET/movies" \
      --principal apigateway.amazonaws.com \
      --statement-id STATEMENT_ID \
      --action lambda:InvokeFunction

Staging alias:

    aws lambda add-permission --function-name "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:FindAllMovies:Staging" \
      --source-arn "arn:aws:execute-api:us-east-1:ACCOUNT_ID:API_ID/*/GET/movies" \
      --principal apigateway.amazonaws.com \
      --statement-id STATEMENT_ID \
      --action lambda:InvokeFunction

Then, create a new stage called production. Next, click on the Stage Variables tab and create a new stage variable called lambda, setting FindAllMovies:Production as its value. Do the same for the staging environment, with the lambda variable pointing to the Lambda function's Staging alias.

To test the endpoint, use the cURL command or any REST client you're familiar with; I opt for Postman. A GET request on the API Gateway production stage's invoke URL should return all the movies in the database. Do the same for the staging environment with a new header key, Count=4; you should get only four movie items in return.

That's how you can maintain multiple environments for your Lambda functions. You can now easily promote the 1.1.0 version into production by changing the Production alias to point to 1.1.0 instead of 1.0.0, and roll back to the previous working version in case of failure, all without changing the API Gateway settings.

To summarize, we learned how to deploy serverless applications using AWS Lambda functions. If you've enjoyed reading this and want to know more about how to set up a CI/CD pipeline from scratch to automate the process of deploying Lambda functions to production with the Go programming language, check out our book, Hands-On Serverless Applications with Go.

Keep your serverless AWS applications secure [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier


PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API

Aarthi Kumaraswamy
02 Oct 2018
4 min read
Back in May, the PyTorch team shared their roadmap for the PyTorch 1.0 release and highlighted that this most anticipated version would not only continue to provide stability and simplicity of use, but would also make the framework production ready while offering a hassle-free migration experience for its users. Today, Facebook announced the release of PyTorch 1.0 RC1.

The official announcement states, “PyTorch 1.0 accelerates the workflow involved in taking breakthrough research in artificial intelligence to production deployment. With deeper cloud service support from Amazon, Google, and Microsoft, and tighter integration with technology providers ARM, Intel, IBM, NVIDIA, and Qualcomm, developers can more easily take advantage of PyTorch’s ecosystem of compatible software, hardware, and developer tools. The more software and hardware that is compatible with PyTorch 1.0, the easier it will be for AI developers to quickly build, train, and deploy state-of-the-art deep learning models.”

PyTorch is an open-source, Python-based deep learning framework which provides powerful GPU acceleration. PyTorch is known for advanced indexing and functions, an imperative style, integration support, and API simplicity; this is one of the key reasons why developers prefer PyTorch for research and hackability. On the downside, it has struggled with adoption in production environments. The PyTorch team acknowledged this in their roadmap and have worked on improving this aspect significantly in PyTorch 1.0, not just by improving the library but also by enriching its ecosystem through partnerships with key software and hardware vendors.

“One of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale: exporting to C++-only runtimes for use in larger projects, optimizing mobile systems on iPhone, Android, Qualcomm and other systems, using more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win), quantized inference (such as 8-bit inference)”, stated the PyTorch team in their roadmap post.

Below are some key highlights of this major milestone for PyTorch.

JIT

The JIT is a set of compiler tools for bridging the gap between research in PyTorch and production. It includes a language called Torch Script (a subset of Python) and two ways (tracing mode and script mode) in which existing code can be made compatible with the JIT. Torch Script code can be aggressively optimized, and it can be serialized for later use in the new C++ API, which doesn't depend on Python at all.
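As a rough illustration of the tracing workflow described above (a minimal sketch against the torch.jit API as described in the announcement, not code taken from the release notes; the model is a made-up example), a regular module can be traced with example inputs and then saved for use outside Python:

    import torch

    # An ordinary eager-mode model.
    class TwoLayerNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = torch.nn.Linear(8, 16)
            self.fc2 = torch.nn.Linear(16, 1)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    model = TwoLayerNet()
    example_input = torch.rand(1, 8)

    # Tracing mode: run the model once on example data and record the operations.
    traced = torch.jit.trace(model, example_input)

    # The traced module can be serialized and later loaded from C++ without Python.
    traced.save("two_layer_net.pt")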
torch.distributed new "C10D" library

The torch.distributed package and the torch.nn.parallel.DistributedDataParallel module are backed by the new "C10D" library. The main highlights of the new library are:

C10D is performance driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI
Significant DistributedDataParallel performance improvements, especially for slower, ethernet-based hosts
Async support for all distributed collective operations in the torch.distributed package
send and recv support in the Gloo backend

C++ Frontend [API Unstable]

The C++ frontend is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend. It is intended to enable research in high-performance, low-latency, bare-metal C++ applications. It provides equivalents to torch.nn, torch.optim, torch.data, and other components of the Python frontend. The C++ frontend is marked as "API Unstable" as part of PyTorch 1.0. This means it is ready to be used for building research applications, but it still has some open construction sites that will stabilize over the next month or two. In other words, it is not ready for use in production yet.

PyTorch 1.0 also introduces N-dimensional empty tensors, a collection of new operators inspired by NumPy and SciPy, and new distributions such as the Weibull, negative binomial, and multivariate log gamma distributions. There have also been a lot of breaking changes, bug fixes, and other improvements. For more details, read the official announcement and the official release notes for PyTorch.

What is PyTorch and how does it work?
Build your first neural network with PyTorch [Tutorial]
Is Facebook-backed PyTorch better than Google’s TensorFlow?
Can a production-ready Pytorch 1.0 give TensorFlow a tough time?


What are REST verbs and status codes [Tutorial]

Sugandha Lahoti
02 Oct 2018
12 min read
The name Representational State Transfer (REST) was coined by Roy Fielding of the University of California. It is a much simpler and more lightweight style of web service compared to SOAP. Performance, scalability, simplicity, portability, and modifiability are the main principles behind the REST design. REST is a stateless, cacheable, and simple architecture that is not a protocol but a pattern.

In this tutorial, we will talk about REST verbs and status codes. The article is taken from the book Building RESTful Web services with Go by Naren Yellavula. In this book, you will explore the necessary concepts of REST API development by building a few real-world services from scratch.

REST verbs

REST verbs specify an action to be performed on a specific resource or a collection of resources. When a request is made by the client, it should send this information in the HTTP request:

REST verb
Header information
Body (optional)

As we mentioned previously, REST uses the URI to decode the resource to be handled. There are quite a few REST verbs available, but six of them are used frequently: GET, POST, PUT, PATCH, DELETE, and OPTIONS. If you are a software developer, you will be dealing with these six most of the time. The following table explains the operation, the target resource, and what happens if the request succeeds or fails:

REST Verb | Action | Success | Failure
GET | Fetches a record or set of resources from the server | 200 | 404
OPTIONS | Fetches all available REST operations | 200 | -
POST | Creates a new resource or set of resources | 201 | 404, 409
PUT | Updates or replaces the given record | 200, 204 | 404
PATCH | Modifies the given record | 200, 204 | 404
DELETE | Deletes the given resource | 200 | 404

The numbers in the Success and Failure columns are HTTP status codes. Since REST is stateless, whenever a client initiates a REST operation it needs a way to find out whether the operation was successful or not. For that reason, HTTP defines status codes for the response, and REST assigns the preceding status code types to each operation. A REST API should strictly follow these rules to achieve reliable client-server communication.

All REST services have the following format, consisting of the host and the API endpoint. The API endpoint is the URL path predefined by the server; every REST request should hit that path. A trivial REST API URI: http://HostName/APIEndpoint/Query(optional)

Let us look at all the verbs in more detail. REST API design starts with the definition of operations and API endpoints. Before implementing the API, the design document should list all the endpoints for the given resources. In the following section, we carefully observe REST API endpoints using PayPal's REST API as a use case.

GET

A GET method fetches the given resource from the server. To specify a resource, GET uses a few types of URI queries:

Query parameters
Path-based parameters

In case you didn't know, all of your browsing of the web is done by performing GET requests to servers. For example, if you type www.google.com, you are actually making a GET request to fetch the search page. Here, your browser is the client and Google's web server is the backend implementer of the web service. A successful GET operation returns a 200 status code.

Examples of path parameters: Everyone knows PayPal. PayPal creates billing agreements with companies, and if you register with PayPal for a payment system, they provide you with a REST API for all your billing needs.
The sample GET request for getting the information of a billing agreement looks like this: /v1/payments/billing-agreements/agreement_id. Here, the resource is specified with a path parameter. When the server sees this line, it interprets it as: I got an HTTP request asking for agreement_id from the billing agreements. It then searches through the database, goes to the billing-agreements table, and finds the agreement with the given agreement_id. If that resource exists, it sends the details back in the response (200 OK); otherwise, it sends a response saying resource not found (404).

Using GET, you can also query a list of resources instead of a single one, as in the preceding example. PayPal's billing transactions related to an agreement can be fetched with /v1/payments/billing-agreements/transactions. This line fetches all transactions that occurred on that billing agreement. In both cases, the data is retrieved in the form of a JSON response. The response format should be designed beforehand so that the client can consume it as agreed.

Examples of query parameters are as follows. Query parameters are intended to add detailed information to identify a resource on the server. For example, take a sample fictitious API created for fetching, creating, and updating the details of books. A query-parameter-based GET request will be in this format: /v1/books/?category=fiction&publish_date=2017

The preceding URI has a few query parameters. It is requesting a book from the books resource that satisfies the following conditions:

It should be a fiction book
The book should have been published in the year 2017

"Get all the fiction books that were released in the year 2017" is the question the client is posing to the server.

Path vs Query parameters: when to use them? A common rule of thumb is that Query parameters are used to fetch multiple resources based on filtering criteria, while a client that needs a single resource with exact URI information should use Path parameters to specify it. For example, a user dashboard can be requested with Path parameters, and filtered data can be modeled with Query parameters. Use Path parameters for a single resource and Query parameters for multiple resources in a GET request.
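A short sketch can tie the GET semantics above to real code. The following is a generic Go net/http example (not taken from the book, and using a hypothetical in-memory books store) that serves /v1/books/ with the category and publish_date query parameters and returns 200 or 404 accordingly:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Book is a hypothetical resource shape for the fictitious books API.
    type Book struct {
        Name        string `json:"name"`
        Category    string `json:"category"`
        PublishDate string `json:"publish_date"`
    }

    var books = []Book{
        {Name: "Dune", Category: "fiction", PublishDate: "2017"},
        {Name: "Go in Action", Category: "programming", PublishDate: "2015"},
    }

    func listBooks(w http.ResponseWriter, r *http.Request) {
        category := r.URL.Query().Get("category")
        year := r.URL.Query().Get("publish_date")

        var matches []Book
        for _, b := range books {
            if (category == "" || b.Category == category) &&
                (year == "" || b.PublishDate == year) {
                matches = append(matches, b)
            }
        }

        if len(matches) == 0 {
            // Nothing satisfied the query: respond with 404, as in the verbs table.
            http.Error(w, "resource not found", http.StatusNotFound)
            return
        }

        // Success: respond with 200 and a JSON body.
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(matches)
    }

    func main() {
        http.HandleFunc("/v1/books/", listBooks)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }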
POST, PUT, and PATCH

The POST method is used to create a resource on the server. In the previous books API, this operation creates a new book with the given details. A successful POST operation returns a 201 status code. A POST request targets the collection resource, /v1/books, with a body like this:

    {"name" : "Lord of the rings", "year": 1954, "author" : "J. R. R. Tolkien"}

This actually creates a new book in the database. An ID is assigned to this record, so that when we GET the resource a URL is created for it. So POST should be done only once, at the beginning.

In fact, Lord of the Rings was published in 1955, so we entered the publication date incorrectly. In order to update the resource, let us use a PUT request. The PUT method is similar to POST: it is used to replace a resource that already exists. The main difference is that PUT is idempotent. Making the same POST call twice creates two instances with the same data, but PUT updates the single resource that already exists. We send a PUT request to /v1/books/1256 with a JSON body like this:

    {"name" : "Lord of the rings", "year": 1955, "author" : "J. R. R. Tolkien"}

1256 is the ID of the book, and this request updates the preceding book with year 1955. Did you observe the drawback of PUT? It actually replaced the entire old record with the new one. We needed to change a single column, but PUT replaced the whole record. That is bad, and for this reason the PATCH request was introduced.

The PATCH method is similar to PUT, except that it won't replace the whole record. PATCH, as the name suggests, patches the column that is being modified. Let us update book 1256 with a new column called ISBN, sending a PATCH request to /v1/books/1256 with a JSON body like this:

    {"isbn" : "0618640150"}

It tells the server: search for the book with ID 1256, then add or modify this column with the given value. PUT and PATCH both return a 200 status for success and 404 for not found.

DELETE and OPTIONS

The DELETE method is used to delete a resource from the database. It is similar to PUT but without any body; it just needs the ID of the resource to be deleted. Once a resource is deleted, subsequent GET requests return a 404 not found status. Responses to this method are not cacheable (in case caching is implemented) because the DELETE method is idempotent.

The OPTIONS method is the most underrated in API development. Given a resource, this method tries to find out all the possible methods (GET, POST, and so on) defined on the server for it. It is like looking at the menu card at a restaurant and then ordering an item which is available (whereas if you randomly order a dish, the waiter will tell you it is not available). It is best practice to implement the OPTIONS method on the server. From the client, make sure OPTIONS is called first, and if the method is available, then proceed with it.

Cross-Origin Resource Sharing (CORS)

The most important application of the OPTIONS method is Cross-Origin Resource Sharing (CORS). Initially, browser security prevented the client from making cross-origin requests: a site loaded from the URL www.foo.com could only make API calls to that host. If the client code needs to request files or data from www.bar.com, then the second server, bar.com, should have a mechanism to recognize foo.com and grant it access to its resources. This process is what CORS describes:

foo.com sends an OPTIONS request to bar.com.
bar.com sends a header like Access-Control-Allow-Origin: http://foo.com in response to the client.
Next, foo.com can access the resources on bar.com without any restrictions, calling any REST method.

If bar.com feels like supplying resources to any host after one initial request, it can set the access control header to * (that is, any host).
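To make the preflight exchange concrete, here is a generic Go sketch (not from the book; the allowed origin http://foo.com and the /v1/books path are just examples) of a handler on bar.com that answers OPTIONS requests with the CORS headers described above:

    package main

    import (
        "log"
        "net/http"
    )

    func booksHandler(w http.ResponseWriter, r *http.Request) {
        // Tell browsers which origin and methods are allowed on this resource.
        w.Header().Set("Access-Control-Allow-Origin", "http://foo.com")
        w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")

        if r.Method == http.MethodOptions {
            // Preflight request: the headers above are the whole answer.
            w.WriteHeader(http.StatusOK)
            return
        }

        // Normal requests are served as usual once the browser is satisfied.
        w.Write([]byte(`{"status":"ok"}`))
    }

    func main() {
        http.HandleFunc("/v1/books", booksHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }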
Types of status codes

There are a few families of status codes. Each family broadly describes an operation status, and each member of that family may have a deeper meaning. A REST API should tell the client precisely what happened after an operation. There are 60+ status codes available, but for REST we concentrate on a few families of codes.

2xx family (successful)

200 and 201 fall under the success family. They indicate that an operation was successful. A plain 200 (Operation Successful) indicates a successful CRUD operation:

200 (Successful Operation) is the most common type of response status code in REST
201 (Successfully Created) is returned when a POST operation successfully creates a resource on the server
204 (No Content) is issued when a client needs a status but no data back

3xx family (redirection)

These status codes are used to convey redirection messages. The most important ones are 301 and 304:

301 is issued when a resource has been moved permanently to a new URL endpoint. It is essential when an old API is deprecated: the response carries the new endpoint with the 301 status, and on seeing it the client should use the new URL to achieve its target.
The 304 status code indicates that content is cached and no modification happened to the resource on the server. This helps with caching content at the client, which only requests data when the cache is modified.

4xx family (client error)

These are the standard error status codes which the client needs to interpret and handle with further actions. They have nothing to do with the server; a wrong request format or an ill-formed REST method can cause these errors. Of these, the most frequent status codes API developers use are 400, 401, 403, 404, and 405:

400 (Bad Request) is returned when the server cannot understand the client request.
401 (Unauthorized) is returned when the client is not sending the authorization information in the header.
403 (Forbidden) is returned when the client has no access to a certain type of resource.
404 (Not Found) is returned when the client requests a resource that does not exist.
405 (Method Not Allowed) is returned if the server bans a few methods on resources; GET and HEAD are exceptions.

5xx family (server error)

These are errors from the server. The client request may be perfect, but these errors can arise due to a bug in the server code. The commonly used status codes are 500, 501, 502, 503, and 504:

500 (Internal Server Error) indicates a development error caused by buggy code or some unexpected condition
501 (Not Implemented) is returned when the server no longer supports the method on a resource
502 (Bad Gateway) is returned when the server itself got an error response from another service vendor
503 (Service Unavailable) is returned when the server is down for one of several reasons, such as a heavy load or maintenance
504 (Gateway Timeout) is returned when the server has waited too long for a response from another vendor and is taking too much time to serve the client

For more details on status codes, visit this link: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status

In this article, we gave an introduction to REST APIs and then talked about REST verbs and status codes, and we saw what a given status code refers to. Next, to dig deeper into URL routing with REST APIs, read our book Building RESTful Web services with Go.

Design a RESTful web API with Java [Tutorial]
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies
Building RESTful web services with Kotlin


Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?

Bhagyashree R
01 Oct 2018
5 min read
Last week, Evan You, the creator of Vue.js, gave a summary of what to expect in the coming major release, Vue.js 3.0. To provide better support for TypeScript, the codebase is being written in TypeScript, leaving behind vanilla JS. The new codebase currently targets evergreen browsers such as Google Chrome and assumes baseline native ES2015 support. Let's see what else we will see in this major iteration.

High-level API changes

Template syntax will not see many changes, except for some tweaks in the scoped slots syntax.
Vue.js 3.0 will come with native support for class-based components. This will provide users with an API that is pleasant to use in native ES2015 without the need for any transpilation or stage-x features.
The Vue.js 3.x codebase will be written in TypeScript, providing improved support for TypeScript. Support for the 2.x object-based component format will be provided by internally transforming the object to a corresponding class.
Functional components can now be plain functions; however, async components will need to be explicitly created via a helper function.
The virtual DOM format used in render functions will see major changes. Upgrading will be easier if you don't heavily rely on handwritten (non-JSX) render functions in your app.
Mixins will still be supported.

Cleaner and more maintainable source code architecture

To make contributing to Vue.js easier, Vue.js 3.0 is being rewritten from the ground up for a cleaner and more maintainable architecture. To do this, the developers are breaking some internal functionality into individual packages to isolate the scope of complexity. For example, the observer module will become its own package, with its own public API and tests. As mentioned earlier, the codebase is being rewritten in TypeScript, which makes proficiency in TypeScript a primary prerequisite for contributing to the new codebase. However, the type information and IDE support will enable a new contributor to easily make meaningful contributions.

Proxy-based observation mechanism

Vue.js 3.0 will come with a Proxy-based observer implementation that provides reactivity tracking with full language coverage. This aims to eliminate a number of limitations of the current implementation in Vue.js 2, which is based on Object.defineProperty:

Detection of property addition or deletion
Detection of Array index mutation or .length mutation
Support for Map, Set, WeakMap and WeakSet

Additionally, this new observer will have the following features:

Exposed API for creating observables: This provides a lightweight and simple cross-component state management solution for small to medium scale scenarios.
Lazy observation by default: In Vue.js 3.x, only the data used to render the initially visible part of an app will need to be observed. This will eliminate the overhead on app startup if your dataset is huge.
Immutable observables: Immutable versions of a value can be created to prevent mutations even on nested properties, except when the system temporarily unlocks them internally.
Better debugging capabilities: Two new hooks, renderTracked and renderTriggered, are added. These will help you precisely trace when and why a component re-render is tracked or triggered.
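To see why a Proxy removes the Object.defineProperty limitations listed above, here is a tiny, framework-free TypeScript sketch of the idea; it only illustrates the mechanism and is not Vue's actual observer code, and the reactive helper name is made up:

    // A minimal reactive() helper: every property read and write goes through
    // the Proxy traps, so added or deleted keys are observed automatically,
    // which Object.defineProperty-based tracking cannot do.
    function reactive<T extends object>(target: T, onChange: (key: PropertyKey) => void): T {
      return new Proxy(target, {
        get(obj, key, receiver) {
          // A real observer would record "key was read" here for dependency tracking.
          return Reflect.get(obj, key, receiver);
        },
        set(obj, key, value, receiver) {
          const ok = Reflect.set(obj, key, value, receiver);
          onChange(key); // trigger re-renders or effects
          return ok;
        },
        deleteProperty(obj, key) {
          const ok = Reflect.deleteProperty(obj, key);
          onChange(key); // deletions are visible too
          return ok;
        },
      });
    }

    // Usage: mutations, additions, and deletions all notify the observer.
    const state = reactive({ count: 0 } as Record<string, unknown>, (key) =>
      console.log(`changed: ${String(key)}`),
    );
    state.count = 1;       // changed: count
    state.newProp = "hi";  // changed: newProp (property addition is detected)
    delete state.newProp;  // changed: newProp (deletion is detected)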
Other runtime improvements

Smaller runtime

The new codebase is designed to be tree-shaking friendly. The built-in components and directive runtime helpers will be imported on demand and are tree-shakable. As a result, the constant baseline size for the new runtime is under 10 KB gzipped.

Improved performance

On initial benchmarks, the developers are observing up to 100% performance improvements across the board. Vue.js 3.0 will reduce the time spent in JavaScript when your app boots up.

Built-in support for Fragments and Portals

Vue 3.0 will come with built-in support for Fragments and Portals. Fragments are components returning multiple root nodes. Portals are introduced to render a sub-tree in another part of the DOM, instead of inside the component.

Improved slots mechanism

All compiler-generated slots are now functions and are invoked during the child component's render call. This ensures dependencies in slots are collected as dependencies of the child instead of the parent. This means that:

When slot content changes, only the child re-renders
When the parent re-renders, the child does not have to if its slot content did not change

This improvement will provide even more precise change detection at the component tree level.

Custom Renderer API

Using this API, you will be able to create custom renderers. It will be easier for render-to-native projects like Weex and NativeScript Vue to stay up to date with upstream changes, and it will also make the creation of custom renderers for various other purposes trivially easier.

Along with these, the team has announced a few compiler improvements and IE11 support. They haven't revealed a date yet, but we can expect Vue.js 3.0 to release in 2019. To know more, check out the official announcement on Medium.

Vue CLI 3.0 is here as the standard build toolchain behind Vue applications
Introducing Vue Native for building native mobile apps with Vue.js
Testing Single Page Applications (SPAs) using Vue.js developer tools

Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”

Natasha Mathur
01 Oct 2018
4 min read
Tim Berners-Lee, the creator of the World Wide Web, revealed his plans for changing the web for the better as he launched his new startup, Inrupt, last Friday. His major goal with Inrupt is to decentralize the web and to end the tech giants' monopolies (Facebook, Google, Amazon, etc.) on user data. He hopes to achieve this with Inrupt's new open-source project, Solid, a platform built using the existing web format. Tim has been working on Solid over recent years in collaboration with people at MIT.

“I’ve always believed the web is for everyone. That's why I and others fight fiercely to protect it. The changes we’ve managed to bring have created a better and more connected world. But for all the good we’ve achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas”, said Tim in his announcement blog post.

Solid to provide users with more control over their data

Solid aims to completely change the current web model, in which users hand over their personal data to digital giants “in exchange for perceived value”. “Solid is how we evolve the web in order to restore balance - by giving every one of us complete control over data, personal or not, in a revolutionary way,” says Tim.

Solid offers every user a choice regarding where their data gets stored, which specific people and groups can access selected elements of that data, and which apps they use. It will enable you, your family, and colleagues to link and share data with anyone, and people can look at the same data with different apps at the same time. Solid technology is built on existing web formats, and developers will have to integrate Solid into their apps and sites. Solid hopes to empower individuals, developers, and businesses across the globe with totally new ways to build and find innovative and trusted applications. “I see multiple market possibilities, including Solid apps and Solid data storage”, adds Tim.

According to Katrina Brooker, a writer at FastCompany, every bit of data you create or add on Solid exists within a Solid pod, a personal online data store. These pods provide Solid users with control over their applications and information on the web; whoever uses the Solid platform gets a Solid identity and a Solid pod. This is how Solid will enforce “personal empowerment through data”, by helping users take back the power of the web from gigantic corporations.

Additionally, Tim told Brooker that he is currently working on a way to create a decentralized version of Alexa, Amazon's popular digital assistant. “He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children’s school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporations to individuals”, writes Brooker.

Given the recent Facebook security breach that compromised 50M user accounts, and past data misuse scandals such as the Cambridge Analytica scandal, Solid seems to be an angel in disguise that could transfer the power of data control into the hands of users. However, there is no guarantee that Solid will be widely accepted, as a lot of companies on the web are extremely sensitive when it comes to their data and might not be interested in losing control over it.
Still, it is definitely a step towards a freer and more open internet. Tim has taken a sabbatical from MIT and has reduced his day-to-day involvement with the World Wide Web Consortium (W3C) to work full time on Inrupt. "It is going to take a lot of effort to build the new Solid platform and drive broad adoption but I think we have enough energy to take the world to a new tipping point," says Tim.

For more coverage on this news, check out the official Inrupt blog.

Facebook’s largest security breach in its history leaves 50M user accounts compromised

Sugandha Lahoti
01 Oct 2018
4 min read
Facebook has been going through a massive decline in trust in recent times. And to make matters worse, it witnessed another massive security breach last week. On Friday, Facebook announced that nearly 50M Facebook accounts had been compromised by an attack that gave hackers the ability to take over users' accounts. This security breach has not only affected users' Facebook accounts but also impacted other accounts linked to Facebook. This means that a hacker could have accessed any account of yours that you log into using Facebook.

This security issue was first discovered by Facebook on Tuesday, September 25. The hackers apparently exploited a series of interactions between three bugs related to Facebook's "View As" feature, which lets people see what their own profile looks like to someone else. The hackers stole Facebook access tokens to take over people's accounts. These tokens allow an attacker to take full control of the victim's account, including logging into third-party applications that use Facebook Login.

"I'm glad we found this and fixed the vulnerability," Mark Zuckerberg said on a conference call with reporters on Friday morning. "But it definitely is an issue that this happened in the first place. I think this underscores the attacks that our community and our services face."

As of now, this vulnerability has been fixed and Facebook has contacted law enforcement authorities. The vice-president of product management, Guy Rosen, said that Facebook was working with the FBI, but he did not comment on whether national security agencies were involved in the investigation.

As a security measure, Facebook has automatically logged out 90 million Facebook users from their accounts. These include the 50 million that Facebook knows were affected and an additional 40 million that potentially could have been. This attack exploited the complex interaction of multiple issues in Facebook code. It originated from a change made to Facebook's video uploading feature in July 2017, which impacted "View As."

Facebook says that the affected users will get a message at the top of their News Feed about the issue when they log back into the social network. The message reads, "Your privacy and security are important to us, We want to let you know about recent action we've taken to secure your account." The message is followed by a prompt to click and learn more details. Facebook has also publicly apologized, stating: "People's privacy and security is incredibly important, and we're sorry this happened. It's why we've taken immediate action to secure these accounts and let users know what happened."

Facebook's troubles did not end there. Some users have also tweeted that they were unable to post Facebook's security breach coverage from The Guardian and Associated Press. When trying to share the story to their news feed, they were met with an error message that prevented them from sharing it. The error reads, "Our security systems have detected that a lot of people are posting the same content, which could mean that it's spam. Please try a different post." People have criticized Facebook's automated content-flagging tools; this is an example of how they tag legitimate content as spam, and they have also previously failed to detect harassment and hate speech. However, according to updates on Facebook's Twitter account, the bug has now been resolved.
https://twitter.com/facebook/status/1045796897506516992

The security breach comes at a time when the social media company is already facing multiple criticisms over issues such as foreign election interference, misinformation and hate speech, and data privacy. Recently, an indie Taiwanese hacker also gained popularity with his plan to take down Mark Zuckerberg's Facebook page and broadcast it live. However, he soon got cold feet and said he would refrain from doing so after receiving global attention following his announcement. "I am canceling my live feed, I have reported the bug to Facebook and I will show proof when I get a bounty from Facebook," he told Bloomberg News.

It's high time that Facebook began taking its users' privacy seriously, perhaps even rethinking its algorithms and platform entirely. It should also take responsibility for the real-world consequences of actions enabled by Facebook.

Read more:
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
WhatsApp co-founder reveals why he left Facebook; is called 'low class' by a Facebook senior executive
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma

The ethical dilemmas developers working on Artificial Intelligence products must consider

Amey Varangaonkar
29 Sep 2018
10 min read
Facebook has recently come under the scanner for sharing the data of millions of users without their consent. Its use of Artificial Intelligence to predict customers' behavior and then sell this information to advertisers has come under heavy criticism and has raised concerns over the privacy of users' data. A lot of it has to do with the 'smart use' of data by companies like Facebook. As Artificial Intelligence continues to revolutionize the industry, and as the applications of AI rapidly grow across a spectrum of real-world domains, the need for regulated, responsible use of AI has become more important than ever. Several ethical questions are being asked about the way the technology is being used and how it is impacting our lives, Facebook being just one of many examples right now. In this article, we look at some of these ethical concerns surrounding the use of AI.

Infringement of users' data privacy

Probably the biggest ethical concern in the use of Artificial Intelligence and smart algorithms is the way companies use them to gain customer insights without getting the consent of those customers in the first place. Tracking customers' online activity, or using the customer information available on various social media and e-commerce websites to tailor marketing campaigns or advertisements targeted at the customer, is a clear breach of their privacy, and sometimes even amounts to 'targeted harassment'. In the case of Facebook, for example, there have been many high-profile instances of misuse and abuse of user data, such as:
The recent Cambridge Analytica scandal where Facebook's user data was misused
Boston-based data analytics firm Crimson Hexagon misusing Facebook user data
Facebook's involvement in the 2016 election meddling
Accusations of Facebook, along with Twitter and Google, having a bias against conservative views
Accusations of discrimination with targeted job ads on the basis of gender and age

How far will tech giants such as Facebook go to fix what they have broken - the trust of many of their users? The European Union General Data Protection Regulation (GDPR) is a positive step to curb this malpractice. However, such a regulation needs to be implemented worldwide, which has not been the case yet. There needs to be a universal agreement on the use of public data in the modern connected world. Individual businesses and developers must be accountable and hold themselves ethically responsible when strategizing or designing these AI products, keeping users' privacy in mind.

Risk of automation in the workplace

The most fundamental ethical issue that comes up when we talk about automation, or the introduction of Artificial Intelligence in the workplace, is how it affects the role of human workers. 'Does the AI replace them completely?' is a common question asked by many. Also, if human effort is not going to be replaced by AI and automation, in what way will the worker's role in the organization be affected? The World Economic Forum (WEF) recently released a Future of Jobs report in which it highlights the impact of technological advancements on the current workforce. The report states that machines will be able to do half of the current job tasks within the next 5 years.
A few important takeaways from this report with regard to automation and its impact on skilled human workers are:
Existing jobs will be augmented through technology to create new tasks and resulting job roles altogether - from piloting drones to remotely monitoring patients.
The inclusion of AI and smart algorithms is going to reduce the number of workers required for certain work tasks.
The layoffs in certain job roles will also involve difficult transitions for many workers and investment in reskilling and training, commonly referred to as collaborative automation.
As we enter the age of machine-augmented human productivity, employees will be trained to work alongside AI tools and systems, empowering them to work more quickly and efficiently. This will come with an additional cost of training, which the organization will have to bear.

Artificial stupidity - how do we eliminate machine-made mistakes?

It goes without saying that learning happens over time, and it is no different for AI. AI systems are fed lots and lots of training data and real-world scenarios. Once a system is fully trained, it is then made to predict outcomes on real-world test data, and the accuracy of the model is then determined and improved. It is only normal, however, that the training model cannot be fed every possible scenario there is, and there might be cases the AI is unprepared for, or where it can be fooled by an unusual scenario or test case. Images whose patterns a deep neural network is unable to identify are one example of this. Another is the presence of random dots in an image that lead the AI to think there is a pattern where there really isn't any. Deceptive perceptions like this may lead to unwanted errors, which isn't really the AI's fault; it's just the way they are trained. These errors, however, can prove costly to a business and can lead to potential losses. What is the way to eliminate these possibilities? How do we identify and weed out such training errors or inadequacies, which go a long way in determining whether an AI system can work with near 100% accuracy? These are the questions that need answering. It also leads us to the next problem, which is: who takes accountability for the AI's failure?

If the AI fails or misbehaves, who takes the blame?

When an AI system designed to do a particular task fails to correctly perform the required task for some reason, who is responsible? This aspect needs careful consideration and planning before any AI system can be adopted, especially on an enterprise scale. When a business adopts an AI system, it does so assuming the system is fail-safe. However, the AI system may not be designed or trained effectively because either:
It was not trained properly using relevant datasets
It was not used in a relevant context and, as a result, gave inaccurate predictions
Any failure like this could lead to potentially millions in losses and could adversely affect the business, not to mention have adverse unintended effects on society. Who is accountable in such cases? Is it the AI developer who designed the algorithm or the model? Or is it the end user or the data scientist who is using the tool as a customer? Clear expectations and accountabilities need to be defined at the very outset, and countermeasures need to be put in place to avoid such failures, so that the losses are minimal and the business is not impacted severely. The short sketch below makes the earlier brittleness point concrete.
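To illustrate the 'artificial stupidity' point above, here is a toy, purely illustrative Python sketch (not taken from the article or any real product): a tiny linear classifier whose prediction flips when a small, carefully aligned perturbation is added to the input. The weights, input, and noise values are all made up for the illustration; real models and attacks are far more complex, but the underlying brittleness is the same.

```python
# Toy illustration: a small, targeted perturbation flips a linear model's output.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                       # number of input features ("pixels")

# Pretend these weights were learned from training data.
w = rng.normal(size=d)

def predict(x):
    """A toy linear classifier: class 1 if the score is positive, else class 0."""
    return int(x @ w > 0)

# A legitimate input that the model confidently classifies as class 0.
x = -0.5 * w / np.linalg.norm(w) + 0.1 * rng.normal(size=d)
print("original prediction:", predict(x))                     # 0

# Nudge every feature by only 0.01, in the worst-case direction (the sign of w).
# Each individual feature barely moves, yet the prediction flips because the
# tiny changes all align with the model's weights.
x_noisy = x + 0.01 * np.sign(w)
print("max per-feature change:", np.max(np.abs(x_noisy - x)))  # 0.01
print("prediction after noise:", predict(x_noisy))             # 1
```

The sketch shows why a model that performs well on its training distribution can still be derailed by inputs it was never prepared for, which is exactly the kind of failure mode businesses need to plan for.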
Bias in Artificial Intelligence - a key problem that needs addressing

One of the key questions in adopting Artificial Intelligence systems is whether they can be trusted to be impartial, fair, or neutral. In her NIPS 2017 keynote, Kate Crawford - who is a Principal Researcher at Microsoft as well as the Co-Founder & Director of Research at the AI Now institute - argues that bias in AI cannot just be treated as a technical problem; the underlying social implications need to be considered as well. For example, a machine learning software to detect potential criminals that tends to be biased against a particular race raises a lot of questions about its ethical credibility. Or when a camera refuses to detect a particular kind of face because it does not fit into the standard template of a human face in its training dataset, it naturally raises the racism debate. Although the AI algorithms are designed by humans themselves, it is important that the learning data used to train these algorithms is as diverse as possible and factors in as many kinds of variation as possible to avoid these kinds of biases. AI is meant to give out fair, impartial predictions without any preset predispositions or bias, and this is one of the key challenges that researchers and AI developers have not yet overcome.

The problem of Artificial Intelligence in cybersecurity

As AI revolutionizes the security landscape, it is also raising the bar for attackers. With passing time, it is getting more difficult to breach security systems. To tackle this, attackers are resorting to state-of-the-art machine learning and other AI techniques to breach systems, while security professionals adopt their own AI mechanisms to prevent and protect the systems from these attacks. The cybersecurity firm Darktrace reported an attack in 2017 that used machine learning to observe and learn user behavior within a network. This is a classic case of what happens when technology falls into the wrong hands and the necessary steps cannot be taken to tackle or prevent the unethical use of AI - in this case, a cyber attack. The threats posed by a vulnerable AI system with no security measures in place - one that can easily be hacked into and misused - need no introduction. This is not a desirable situation for any organization to be in, especially when it has invested thousands or even millions of dollars into the technology. When the AI is developed, strict measures should be taken to ensure it is accessible only to a specific set of people and can be altered or changed only by its developers or by authorized personnel.

Just because you can build an AI, should you?

The more potent the AI becomes, the more potentially devastating its applications can be. Whether it is replacing human soldiers with AI drones or developing autonomous weapons, the unmitigated use of AI for warfare can have consequences far beyond imagination. Earlier this year, we saw hundreds of Google employees quit the company over its ties with the Pentagon, protesting against the use of AI for military purposes. The employees were strongly of the opinion that the technology they developed has no place on a battlefield and should ideally be used for the benefit of mankind, to make human lives better. Google isn't an isolated case of a tech giant lost in these murky waters.
Microsoft employees, too, protested the company's collaboration with US Immigration and Customs Enforcement (ICE) over building face recognition systems for the agency, especially after revelations that ICE had confined immigrant children in cages and inhumanely separated asylum-seeking families at the US-Mexico border. Amazon is also one of the key tech vendors of facial recognition software to ICE, but its employees did not openly pressure the company to drop the project. While these companies have assured their employees of no direct involvement, it is quite clear that all the major tech giants are supplying key AI technology to the government for defensive (or offensive, who knows) military measures. The secure and ethical use of Artificial Intelligence for non-destructive purposes remains one of the biggest challenges in its adoption today.

Today, there are many risks and caveats associated with implementing an AI system. Given the tools and techniques we have at our disposal currently, it is far-fetched to think of implementing a flawless Artificial Intelligence within a given infrastructure. While we consider all the risks involved, it is also important to reiterate one important fact: when we look at the bigger picture, all technological advancements effectively translate to better lives for everyone. While AI has tremendous potential, whether its implementation is responsible is completely down to us, humans.

Read more:
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
New cybersecurity threats posed by artificial intelligence
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers

Working with Azure container service cluster [Tutorial]

Savia Lobo
29 Sep 2018
5 min read
Azure Container Service is a newer variant of the classic Azure IaaS offering and uses virtual machines as its technological base. This tutorial is an excerpt taken from the book, Implementing Azure Solutions, written by Florian Klaffenbach, Jan-Henrik Damaschke, and Oliver Michalski. In this book, you will learn how to secure a newly deployed Azure Active Directory and also learn how Azure Active Directory Synchronization could be implemented. Today, you will learn the process of working with an Azure Container Service (ACS) cluster.

As an important prerequisite, to work with the cluster you need the Master FQDN (the fully qualified domain name of the cluster's master endpoint). The Master FQDN can be found in the Essentials section of the dashboard of your container service. Now that we know the important prerequisites, we can go further. Each of the three available orchestrators provides you with a work surface. To work with these UIs, you must first create an SSH tunnel. Not sure how to create an SSH tunnel? The following example walks you through the procedure. I assume that you have followed my earlier advice and have installed the PuTTY toolset. I also assume that your SSH key pair is available. Everything okay? Then let's start:
1. Search for the PuTTY tool and open it.
2. On the first page, fill in the Host Name field. The hostname is composed of the admin username (from your cluster) and the Master FQDN and has the format adminusername@masterfqdn. Change the Port to 2200.
3. Now switch to the Connection | SSH | Auth page. Here you enter the path to your SSH private key.
4. Move to the Connection | SSH | Tunnels page. In the Add new forwarded port section, type 80 in the Source port field and localhost:80 in the Destination field. Finally, press the Add button.
5. Now go back to the first page. Press the Open button and the SSH tunnel is built up.

Have you created your SSH tunnel? If yes, we will now take a look at the work surfaces. Remember, since we created our cluster based on DC/OS, these are the DC/OS UIs; the UIs of the other orchestrator types are very similar.

Let's start with the DC/OS dashboard. To reach this UI, enter the following URL into the browser of your choice: http://localhost:80/
With the DC/OS dashboard, you can monitor the performance indicators of your cluster or display the health status of individual components. If you want to see the health status of individual components, click the View all 35 Components button in the Component Health tile. A detailed list with the corresponding status information will open.

In the DC/OS dashboard, you will also find another interesting application. Simply press the Universe button in the navigation area. This starts Mesosphere Universe, a package repository that contains services like Spark, Cassandra, Jenkins, and many others that can be easily deployed onto your cluster. In addition to the solutions provided by Mesosphere, there are also solutions from the community. Just scroll down the page.

The next area is the MARATHON orchestration platform. To reach this UI, enter the following URL into the browser of your choice: http://localhost:80/marathon/
With this UI, you can start new containers and other types of applications in the cluster. In addition, the UI provides information on running containers and applications and continuously plans and schedules tasks. Two short examples: the following screenshot shows the dialog for creating a group.
A group in DC/OS is a collection of apps (services, and so on) that are related to each other (for example, by organization). The next screenshot shows the dialog for creating a New Application. An application is a long-running service that may have one or more instances that map one to one with a task. Internally, the user creates an application by providing an application definition (a JSON file); a minimal sketch of submitting such a definition follows at the end of this excerpt. Marathon then schedules one or more application instances as tasks, depending on what the definition specifies. In the original concept of DC/OS, there is still another part: pods. A pod is a special case of an application. A pod is also a long-running service that may have one or more instances, but these map one to many with collocated tasks. Pod instances may include one or more tasks that share resources (for example, IPs, ports, or volumes). Pods are currently not supported by ACS.

The last area is Mesos, a web UI for viewing the cluster state and, above all, tasks. To reach this UI, enter the following URL into the browser of your choice: http://localhost:80/mesos/
The following screenshot shows the UI for Tasks. The next screenshot shows the UI for Frameworks. A framework running in Mesos consists of two components: a scheduler that registers with the master to be offered resources, and an executor process that is launched on agent nodes to run the tasks. The next screenshot shows the UI for Agents and their conditions. The last screenshot shows the UI for Offers. An offer in Mesos is simply a resource (for example, a container) that is assigned to a framework for processing.

In this post, we learned how to work with an Azure Container Service cluster. If you've enjoyed this post, head over to the book, Implementing Azure Solutions, to learn how to manage, access, and secure your confidential data and implement storage solutions.

Read more:
Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
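As a supplement to the Marathon section above, here is a minimal, hedged Python sketch (not from the book) of submitting an application definition to Marathon's standard REST API through the SSH tunnel created earlier. The app id, command, and resource values are made up for illustration, and the sketch assumes a stock Marathon installation reachable at http://localhost:80/marathon via the tunnel.

```python
# Minimal sketch: submit a Marathon application definition over the SSH tunnel.
# Assumes the PuTTY tunnel from the steps above is up, so Marathon answers at
# http://localhost:80/marathon. The application values below are illustrative.
import requests

app_definition = {
    "id": "/demo/hello-marathon",        # hypothetical application id
    "cmd": "python3 -m http.server 8080",
    "cpus": 0.1,                          # CPU share requested per instance
    "mem": 64,                            # memory in MB per instance
    "instances": 2,                       # Marathon schedules one task per instance
}

response = requests.post(
    "http://localhost:80/marathon/v2/apps",   # Marathon's standard apps endpoint
    json=app_definition,
    timeout=10,
)
response.raise_for_status()
print("Marathon accepted the app:", response.json().get("id"))
```

The same definition could equally be pasted into the New Application dialog shown in the UI; the JSON document is what Marathon uses in both cases.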

BrainNet, an interface to communicate between human brains, could soon make Telepathy real

Sunith Shetty
28 Sep 2018
3 min read
BrainNet provides the first multi-person brain-to-brain interface, allowing noninvasive, direct collaboration between human brains. It can help small teams collaborate to solve a range of tasks using direct brain-to-brain communication.

How does BrainNet operate?

The noninvasive interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver the required information to the brain. For now, the interface allows three human subjects to collaborate on and solve a task using direct brain-to-brain communication. Two of the three subjects are "Senders". The senders' brain signals are decoded using real-time EEG data analysis. This technique extracts the decisions that need to be communicated in order to solve the challenge at hand.

Take the example of a Tetris-like game, where you need to quickly decide whether to rotate a block or drop it as it is in order to fill a line. The senders' signals (decisions) are transmitted over the Internet to the brain of the third subject, the "Receiver". The decisions are delivered to the receiver's brain via magnetic stimulation of the occipital cortex. The receiver can't see the game screen and so cannot decide on their own whether the block needs rotating. The receiver integrates the decisions received and, using an EEG interface, makes an informed call on whether to rotate the block or keep it in the same position. The second round of the game allows the senders to validate the previous move and provide the necessary feedback on the receiver's action.

How did the results look?

The researchers evaluated the performance of BrainNet on the Tetris task considering the following factors:
Group-level performance during the game
True/false positive rates of subjects' decisions
Mutual information between subjects
This was tested with five groups of three human subjects each performing the Tetris task using the BrainNet interface. The average accuracy for the task was 0.813. Furthermore, the researchers also tried varying the information reliability by injecting artificially generated noise into one of the senders' signals. The receivers, however, were able to work out which sender was more reliable based on the information transmitted to their brains.

These positive results open the door to future brain-to-brain interfaces that could enable cooperative problem solving by humans using a "social network" of connected brains. To know more, you can refer to the research paper.

Read more:
Diffractive Deep Neural Network (D2NN): UCLA-developed AI device can identify objects at the speed of light
Baidu announces ClariNet, a neural network for text-to-speech synthesis
Optical training of Neural networks is making AI more efficient

Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies regarding user’s data and privacy. A new research paper published by the Princeton University researchers states that Facebook shares the contact information you handed over for security purposes, with their advertisers. This study was first brought to light by a Gizmodo writer, Kashmir Hill. “Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information”, writes Hill. Recently, Facebook introduced a new feature called custom audiences. Unlike traditional audiences, the advertiser is allowed to target specific users. To do so, the advertiser uploads user’s PII (personally identifiable information) to Facebook. After the uploading is done, Facebook then matches the given PII against platform users. Facebook then develops an audience that comprises the matched users and allows the advertiser to further track the specific audience. Essentially with Facebook, the holy grail of marketing, which is targeting an audience of one, is practically possible; nevermind whether that audience wanted it or not. In today’s world, different social media platforms frequently collect various kinds of personally identifying information (PII), including phone numbers, email addresses, names and dates of birth. Majority of this PII often represent extremely accurate, unique, and verified user data. Because of this, these services have the incentive to exploit and use this personal information for other purposes. One such scenario includes providing advertisers with more accurate audience targeting. The paper titled ‘Investigating sources of PII used in Facebook’s targeted advertising’ is written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. “In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook’s advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook’s advertising service obtains users’ PII,” reads the paper. The researchers developed a novel methodology, which involved studying how Facebook obtains the PII to provide custom audiences to advertisers. “We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use,” reads the paper. The paper uses size estimates to study what sources of PII are used for PII-based targeted advertising. Researchers used this methodology to investigate which range of sources of PII was actually used by Facebook for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over PII. What sources of PII are actually being used by Facebook? Researchers found out that Facebook allows its users to add contact information (email addresses and phone numbers) on their profiles. 
While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way of providing PII to advertisers. Researchers then further moved on to examine whether PII provided by users for security purposes such as two-factor authentication (2FA) or login alerts are being used for targeted advertising. They added and verified a phone number for 2FA to one of the authors’ accounts. The added phone number became targetable after 22 days. This proved that a phone number provided for 2FA was indeed used for PII-based advertising, despite having set the privacy controls to the choice. What control do users have over PII? Facebook allows users the liberty of choosing who can see each PII listed on their profiles, the current list of possible general settings being: Public, Friends, Only Me.   Users can also restrict the set of users who can search for them using their email address or their phone number. Users are provided with the following options: Everyone, Friends of Friends, and Friends. Facebook provides users a list of advertisers who have included them in a custom audience using their contact information. Users can opt out of receiving ads from individual advertisers listed here. But, information about what PII is used by advertisers is not disclosed. What information about how Facebook uses PII gets disclosed to the users? On adding mobile phone numbers directly to one’s Facebook profile, no information about the uses of that number is directly disclosed to them. This Information is only disclosed to users when adding a number from the Facebook website. As per the research results, there’s very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected or that it may be used to allow advertisers to target users. “Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user,” says the researchers in the paper. For more information, check out the official research paper. Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma How far will Facebook go to fix what it broke: Democracy, Trust, Reality Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference

Implementing Identity Security in Microsoft Azure [Tutorial]

Savia Lobo
28 Sep 2018
13 min read
Security boundaries are often thought of as firewalls, IPS/IDS systems, or routers. Logical setups such as DMZs or other network constructs are also often referred to as boundaries. But in the modern world, where many companies support dynamic work models that allow you to bring your own device (BYOD) or heavily use online services for their work, the real boundary is the identity. Today's tutorial is an excerpt taken from the book, Implementing Azure Solutions, written by Florian Klaffenbach, Jan-Henrik Damaschke, and Oliver Michalski. In this book, you will learn how to secure a newly deployed Azure Active Directory and also learn how Azure Active Directory Synchronization could be implemented.

In this post, you will learn how to secure identities on Microsoft Azure. Identities are often the target of hackers, who resell them or use them to steal information. To make attacks as hard as possible, it's important to have well-conceived, centralized identity management. Azure Active Directory provides that and a lot more to support your security strategy and to simplify complex matters such as monitoring of privileged accounts or authentication attacks.

Azure Active Directory

Azure Active Directory (AAD) is a very important service that many other services are based on. It's not a directory service like many may think of when they hear the name Active Directory. AAD is a complex structure without built-in Organizational Units (OUs) or Group Policy Objects (GPOs), but with high extensibility, open web standards for authorization and authentication, and a modern, (hybrid) cloud-focused approach to identity management.

Azure Active Directory editions

The original table in this excerpt compares the four Azure Active Directory editions; its contents, grouped by edition, are as follows:
Available in all editions (Common, Basic, Premium P1, Premium P2): Directory Objects; User/Group Management (add/update/delete), user-based provisioning, device registration; Single Sign-On (SSO); Self-Service Password Change for cloud users; Connect (the sync engine that extends on-premises directories to Azure Active Directory); Security/Usage Reports
Available in Basic, Premium P1, and Premium P2: Group-based access management/provisioning; Self-Service Password Reset for cloud users; Company Branding (logon pages/access panel customization); Application Proxy; 99.9% SLA
Available in Premium P1 and Premium P2: Self-Service Group and app Management, self-service application additions, dynamic groups; Self-Service Password Reset/Change/Unlock with on-premises write-back; Multi-factor authentication (cloud and on-premises (MFA Server)); MIM CAL + MIM Server; Cloud App Discovery; Connect Health; Automatic password rollover for group accounts
Available in Premium P2 only: Identity Protection; Privileged Identity Management
Overview of Azure Active Directory editions

Privileged Identity Management

With the help of Azure AD Privileged Identity Management (PIM), the user gains access to various capabilities. These include the ability to view which users are Azure AD administrators and the possibility to enable on-demand administration of services such as Office 365 or Intune. Furthermore, the user is able to receive reports about changes in administrator assignments or administrator access history. Azure AD PIM allows the user to monitor, manage, and control access within the organization - access to resources in Azure AD and to services such as Office 365 or Microsoft Intune.
Lastly, the user can get alerts if access to privileged roles is granted. Let's take a look at the PIM dashboard. To use Azure AD PIM, it needs to be activated first. Azure AD PIM is easy to find in the search:
Azure AD PIM in the marketplace
After clicking on Azure AD PIM in the marketplace, PIM will probably ask you to re-verify MFA for security reasons. To do this, the MFA token needs to be typed in after clicking on Verify my identity:
Re-verify identity for Azure AD PIM setup
After a successful verification, you will get an output as illustrated here:
Successful re-verification
Now the initial setup will start. The setup guides the user through the task of choosing all accounts in the tenant that have privileged rights. It is also possible to select accounts as eligible for requesting privileged role rights. If the wizard is completed without choosing any roles or users as eligible, it will by default assign the Security Administrator and Privileged Role Administrator roles to the first user that does the PIM setup. Only with these roles is it possible to manage other privileged accounts and make them eligible or grant them rights:
Setup wizard for Azure AD PIM
The following screenshot illustrates the tasks related to Privileged Identity Management:
Azure AD PIM main tasks
As my subscription looks very boring after enabling Azure AD PIM, I chose to show a demo picture from Microsoft that shows how Azure AD PIM could look in a real-world subscription:
Azure AD PIM dashboard (https://docs.microsoft.com/en-us/azure/active-directory/media/active-directory-privileged-identity-management-configure/pim_dash.png)
It's now possible to manage all chosen eligible privileged accounts and roles through Azure AD PIM. Besides removing and adding eligible users to Azure AD PIM, there is also management of privileged roles, where the role activation settings are available. These settings are used to make privileged roles more transparent and trackable, and to implement the just-in-time (JIT) administration model. This is the activation settings blade for the Security Administrator:
Role activation settings for the role Security Administrator
It's also possible to use the rich monitoring and auditing capabilities of Azure AD PIM, so you never lose track of the use of privileged accounts and can track misuse easily. Azure AD PIM is a very useful security feature, and it is even more useful in combination with Azure AD Identity Protection.

Identity protection

Azure AD Identity Protection is a service that provides a central dashboard that informs administrators about potential threats to the organization's identities. It is based on behavioral analysis and provides an overview of risk levels and vulnerabilities. The Azure AD anomaly detection capabilities are used by Azure AD Identity Protection to report suspicious activities. These enable one to identify anomalies affecting the organization's identities in real time, making it more powerful than the regular reporting capabilities. This system calculates a risk level for each user account, giving you the ability to configure risk-based policies in order to automatically protect the identities of your organization. Employing these risk-based policies among the other conditional access controls provided by AAD and EMS enables an organization to provide remediation actions or block access to certain accounts. The key capabilities of Azure Identity Protection can be grouped into two phases.
Detection of vulnerabilities and potentially risky accounts

This phase is basically about automatically classifying suspicious sign-in or user activity. It uses user-defined sign-in risk and user risk policies; these policies are described later. Another feature of this phase is the automatic security recommendations (vulnerabilities) based on Azure-provided rules.

Investigation of potentially suspicious events

This phase is all about investigating the alerts and events that are triggered by the risk policies. Basically, a security-focused person needs to review all users who were flagged by the policies and look at the particular risk events that triggered the alert and so contributed to the higher risk level. It's also important to define a workflow that is used to take the corresponding actions. It also needs someone who regularly investigates the vulnerability warnings and recommendations and estimates the real risk level for the organization.

It's important to take a closer look at the risk policies that can be configured in Azure AD Identity Protection. We will skip the Multi-factor authentication registration configuration here; for more details on MFA, read the next section. Just because it can't be said often enough: I highly recommend enforcing MFA for all accounts! The two policies we can configure are the user risk policy and the sign-in risk policy. The options look quite similar, but the real differentiation happens in the background:
Sign-in risk policy
The following diagram illustrates the user risk policy view:
User risk policy
The main differentiation between sign-in and user risk policies is where the risk events are captured. The sign-in risk policy defines what happens when a certain account appears to have a high number of suspicious sign-in events. This includes sign-ins from an anonymous IP address, logins from different countries in a time frame in which it would not be possible to travel to the other location, and a lot more. User risk policies, on the other hand, are triggered by events that happen after the user has already logged in. Leaked credentials or abnormal behavior are example risk events. Microsoft provides a guide to simulating some risks to verify that the corresponding policies trigger and that events get recorded. The guide is available at this address: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-identityprotection-playbook.

The interesting thing after choosing users is the Conditions setting. This setting defines the threshold of risk that is required to trigger the policy. The different options for the threshold are Low and above, Medium and above, and High. When High is chosen, far more serious risk events are needed to trigger the policy, but it also has the lowest impact on users. When Low and above is chosen, the policy will trigger much more often, but the likelihood of false positives is much higher, too. Finding the right balance between security and usability is, once again, the real challenge. The last option provides a preview of how many users will be affected by the new policy. This helps to review and identify possible misconfigurations in the earlier steps.
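To make the threshold idea concrete, here is a small, purely illustrative Python sketch (not part of the book and not the Azure SDK) of how such a risk-based policy decision could be modeled. The level names and threshold labels mirror the Low and above / Medium and above / High options described above; the sample event and the "require MFA / block" outcomes are assumptions for the illustration only.

```python
# Illustrative only: evaluating a sign-in risk policy threshold.
# Nothing here calls a real Azure API; it just models the portal options above.
RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}

THRESHOLDS = {
    "Low and above": 1,      # triggers most often, most false positives
    "Medium and above": 2,
    "High": 3,               # triggers rarely, lowest impact on users
}

def policy_triggers(detected_risk: str, threshold: str) -> bool:
    """Return True if a sign-in with the given risk level trips the policy."""
    return RISK_LEVELS[detected_risk] >= THRESHOLDS[threshold]

# Example: a medium-risk sign-in (say, from an anonymous IP address).
for threshold in THRESHOLDS:
    action = "require MFA / block" if policy_triggers("medium", threshold) else "allow"
    print(f"{threshold}: {action}")
```

Running the loop shows the trade-off discussed above: the stricter the threshold, the fewer sign-ins are challenged, and the lower the impact on legitimate users.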
Multi-factor authentication

Authentications that require more than a single type of verification method are defined as two-step verifications. This second layer in the sign-in routine can critically improve the security of user transactions and sign-ins. The method can be described as employing two or more of the typical authentication factors:
Something you know (a password)
Something you have (a trusted device such as a phone or a security token)
Something you are (biometrics)

Microsoft uses Azure MFA for this objective. The user's sign-in routine is kept simple, but Azure MFA improves safe access to data and applications. Several verification methods, such as a phone call, a text message, or mobile app verification, can help you strengthen the authentication process. There are two pricing models for Azure MFA: one is based on users, the other on usage. Take your time and consider both models. Just go to the pricing calculator and calculate both to compare them.

Now we will see how easy it is to activate MFA in Azure:
1. First, sign in to the old Azure portal (https://manage.windowsazure.com). After choosing the right Azure Active Directory, click on USERS:
Azure Active Directory in the old Azure portal
2. At the USERS page, click on MANAGE MULTI-FACTOR AUTH at the very bottom:
Multi-factor authentication settings
3. Now a redirection to the MFA portal should take place. In this portal, the MFA management takes place. I will use my demo user Frederik to show the process of activating MFA:
MFA portal
4. Just choose the user that needs to be enabled for MFA and press Enable:
Choosing users for MFA and enabling them
5. After confirming the change (Do you really want to enable MFA?), it takes a few seconds and the user is enabled for MFA:
Confirmation for enabling MFA

To change the billing model for MFA, a new MFA provider needs to be created to replace the existing one. For this, the Multi-Factor Authentication resource should be created from the marketplace. Search for Multi-Factor Authentication in the search box and then click on it:
MFA provider in the marketplace
Click on the Create button to proceed:
Redirection at creation
The little arrow in the box indicates a redirection to the old portal. In the old portal, the MFA provider page opens directly. There is an exclamation mark next to the usage model. This, and the text at the bottom, warn that it is not possible to change the usage model afterwards:
New MFA provider

There are many more features related to Azure Active Directory and identity security that we are not able to discuss in this book. I encourage you to take a look at Azure AD Connect Health, Azure AD Cloud App Discovery, and Azure Information Protection (as part of EMS). It's important to know what services are offered and what advantages they could offer your company. This is an example dialog that is shown after typing the password when MFA is enabled and the authenticator app was chosen as the main method:
MFA with authenticator app
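Azure MFA's own verification happens server-side and its internals are not public, but the "something you have" factor with an authenticator app generally comes down to time-based one-time passwords (TOTP). As a generic, hedged illustration only (this is not Azure code), the pyotp library shows the idea; the shared secret below is generated on the spot purely for the example.

```python
# Generic TOTP illustration (not Azure MFA itself): the server and the
# authenticator app share a secret; the app derives a short-lived code from
# the secret and the current time, and the server verifies it the same way.
import pyotp

# Shared secret provisioned when the user registers the authenticator app
# (generated here for the example; normally transferred via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# On the phone: the app displays the current 6-digit code.
code_shown_on_phone = totp.now()
print("code from the authenticator app:", code_shown_on_phone)

# On the server: verify the code the user typed in, allowing slight clock drift.
print("verified:", totp.verify(code_shown_on_phone, valid_window=1))
```

The point of the factor is that the code is useless without the device holding the secret, and it expires within seconds, which is what makes stolen passwords alone insufficient.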
Conditional access

Another important security feature that is based on Azure Active Directory is conditional access. Although it's much more important when working with Office 365, it is also used to authenticate against Azure AD applications. A conditional access rule grants or denies access to a certain resource based on location, group membership, device state, and the application the user tries to access (a small, purely illustrative sketch of this kind of rule evaluation follows at the end of this article). Besides creating access rules that apply to all users of the corresponding application, it's also possible to apply a rule to a security group or, the other way around, to exclude a group from a rule. There are scenarios with MFA where this could make sense.

Currently, conditional access is completely managed in the old Azure portal (https://manage.windowsazure.com). There is a conditional access feature in the new Azure portal, but it is still in preview and not supported for production. The administrator is also able to combine conditional access policies with Azure AD Multi-factor authentication (MFA). This combines the MFA policies with those of other services, such as Identity Protection, or with the basic MFA policy. This means that even if a user is exempted, per group, from authenticating with MFA to an application, all the other rules still apply. So if there is an MFA policy configured in Identity Protection that enforces MFA, the user still needs to log in using MFA. In the old portal, the conditional access feature is configured on a per-application basis; in the new portal, conditional access rules are configured and managed in the Azure Active Directory resource:
Per-application management in the old Azure portal
The following screenshot shows the view in the new Azure AD:
Central management in Azure Active Directory in the new portal

To summarize, in this tutorial we learned how to secure Azure identities from hackers. If you've enjoyed reading this post, do check out our book, Implementing Azure Solutions, to manage, access, and secure your confidential data and to implement storage solutions.

Read more:
Azure Functions 2.0 launches with better workload support for serverless
Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
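As referenced in the conditional access section above, here is a small, purely illustrative Python sketch of how such a rule evaluation could be modeled. The factors (application, location, group membership, device state) mirror the ones named in the text, but the data structures, group names, and outcomes are made up for illustration and do not correspond to any real Azure API.

```python
# Illustrative only: modelling a conditional access decision.
# None of this talks to Azure; it just encodes the factors described above.
from dataclasses import dataclass

@dataclass
class SignIn:
    user: str
    groups: set
    application: str
    location: str            # e.g. "trusted-network" or "unknown"
    device_compliant: bool

def evaluate_access(signin: SignIn) -> str:
    """Return 'block', 'require_mfa', or 'allow' for a sign-in attempt."""
    # Rule 1: non-compliant devices are blocked outright.
    if not signin.device_compliant:
        return "block"
    # Rule 2: a group can be exempted from MFA for a given application ...
    if signin.application == "intranet-portal" and "mfa-exempt" in signin.groups:
        # ... but other policies (say, an Identity Protection MFA policy for
        # risky locations) still apply, as noted in the text.
        if signin.location == "unknown":
            return "require_mfa"
        return "allow"
    # Rule 3: everything else requires MFA from untrusted locations.
    return "require_mfa" if signin.location != "trusted-network" else "allow"

print(evaluate_access(SignIn("frederik", {"mfa-exempt"}, "intranet-portal",
                             "trusted-network", True)))   # allow
print(evaluate_access(SignIn("frederik", set(), "intranet-portal",
                             "unknown", True)))           # require_mfa
```

The layered evaluation is the key design point: an exemption in one rule never switches off the policies coming from other services, which matches how the combined MFA and conditional access policies are described above.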

9 recommended blockchain online courses

Guest Contributor
27 Sep 2018
7 min read
Blockchain is reshaping the world as we know it. And we are not talking metaphorically because the new technology is really influencing everything from online security and data management to governance and smart contracting. Statistical reports support these claims. According to the study, the blockchain universe grows by over 40% annually, while almost 70% of banks are already experimenting with this technology. IT experts at the Editing AussieWritings.com Services claim that the potential in this field is almost limitless: “Blockchain offers a myriad of practical possibilities, so you definitely want to get acquainted with it more thoroughly.” Developers who are curious about blockchain can turn it into a lucrative career opportunity since it gives them the chance to master the art of cryptography, hierarchical distribution, growth metrics, transparent management, and many more. There were 5,743 mostly full-time job openings calling for blockchain skills in the last 12 months - representing the 320% increase - while the biggest freelancing website Upwork reported more than 6,000% year-over-year growth. In this post, we will recommend our 9 best blockchain online courses. Let’s take a look! Udemy Udemy offers users one of the most comprehensive blockchain learning sources. The target audience is people who have heard a little bit about the latest developments in this field, but want to understand more. This online course can help you to fully understand how the blockchain works, as well as get to grips with all that surrounds it. Udemy breaks down the course into several less complicated units, allowing you to figure out this complex system rather easily. It costs $19.99, but you can probably get it with a 40% discount. The one downside, however, is that content quality in terms of subject scope can vary depending on the instructor, but user reviews are a good way to gauge quality. Each tutorial lasts approximately 30 minutes, but it also depends on your own tempo and style of work. Pluralsight Pluralsight is an excellent beginner-level blockchain course. It comes in three versions: Blockchain Fundamentals, Surveying Blockchain Technologies for Enterprise, and Introduction to Bitcoin and Decentralized Technology. Course duration varies from 80 to 200 minutes depending on the package. The price of Pluralsight is $29 a month or $299 a year. Choosing one of these options, you are granted access to the entire library of documents, including course discussions, learning paths, channels, skill assessments, and other similar tools. Packt Publishing Packt Publishing has a wide portfolio of learning products on Blockchain for varying levels of experience in the field from beginners to experts. And what’s even more interesting is that you can choose your learning format from books, ebooks to videos, courses and live courses. Or you could simply subscribe to MAPT, their library to gain access to all products at a reasonable price of $29 monthly and $150 annually.  It offers several books and videos on the leading blockchain technology. You can purchase 5 blockchain titles at a discounted rate of $50. Here’s the list of top blockchain courses offered by Packt Publishing: Exploring Blockchain and Crypto-currencies: You will gain the foundational understanding of blockchain and crypto-currencies through various use-cases. Building Blockchain Projects: In this, you will be able to develop real-time practical DApps with Ethereum and JavaScript. 
Mastering Blockchain - Second Edition: You can learn about cryptography and cryptocurrencies, so you can build highly secure, decentralized applications and conduct trusted in-app transactions. Hands-On Blockchain with Hyperledger: This book will help you leverage the power of Hyperledger Fabric to develop Blockchain-based distributed ledgers with ease. Learning Blockchain Application Development [video ]: This interactive video will help you learn build smart contracts and DApps on Ethereum. Create Ethereum and Blockchain Applications using Solidity [video ]: This video will help you learn about Ethereum, Solidity, DAO, ICO, Bitcoin, Altcoin, Website Security, Ripple, Litecoin, Smart Contracts, and Apps. Cryptozombies Cryptozombies is an online blockchain course based on gamification elements. The tool teaches you to write smart contracts in Solidity through building your own crypto-collectibles game. It is entirely Ethereum-focused, but you don’t need any previous experience to understand how Solidity works. There is a step by step guide that explains to you even the smallest details, so you can quickly learn to create your own fully-functional blockchain-based game. The best thing about Cryptozombies is that you can test it for free and give up in case you don’t like it. Coursera The blockchain is the epicenter of the cryptocurrency world, so it’s necessary to study it if you want to deal with Bitcoin and other digital currencies. Coursera is the leading online resource in the field of virtual currencies, so you might want to check it out. After this course like Blockchain Specialization, you’ll know everything you need to be able to separate fact from fiction when reading claims about Bitcoin and other cryptocurrencies. You’ll have the conceptual foundations you need to engineer to secure software that interacts with the Bitcoin network. And you’ll be able to integrate ideas from Bitcoin in your own projects. The course is a 4-part course spanning a duration 4 weeks, but you can take each part separately. The price depends on the level and features you choose. LinkedIn Learning (formerly known as Lynda) LinkedIn Learning (what used to be Lynda) doesn't offer a specific blockchain course, but it does have a wide range of industry-related learning sources. A search for ‘blockchain’ will present you with almost 100 relevant video courses. You can find all sorts of lessons here, from beginner to expert levels. Lynda allows you to customize selection according to video duration, authors, software, subjects, etc. You can access the library for $15 a month. B9Lab B9Lab ETH-25 Certified Online Ethereum Developer Course is another course that promotes blockchain technology aimed at the Ethereum platform. It’s a 12-week in-depth learning solution that targets experienced programmers. B9Lab introduces everything there is to know about blockchain and how to build useful applications. Participants are taught about the Ethereum platform, the programming language Solidity, how to use web3 and the Truffle framework, and how to tie everything together. The price is €1450 or about $1700. IBM IBM made a self-paced blockchain course, titled Blockchain Essentials that lasts over two hours. The video lectures and lab in this course help you learn about blockchain for business and explore key use cases that demonstrate how the technology adds value. You can learn how to leverage blockchain benefits, transform your business with the new technology, and transfer assets. 
Besides that, you get a nice wrap-up and a quiz to test your knowledge upon completion. IBM’s course is free of charge. Khan Academy Khan Academy is the last, but certainly not the least important online course on our list. It gives users a comprehensive overview of blockchain-powered systems, particularly Bitcoin. Using this platform, you can learn more on cryptocurrency transactions, security, proof of work, etc. As an online education platform, Khan Academy won’t cost you a dime. [dropcap]B[/dropcap]lockchain is the groundbreaking technology that opens new boundaries in almost every field of business. It directly influences financial markets, data management, digital security, and a variety of other industries. In this post, we presented 9 best blockchain online courses you should try. These sources can teach you everything there is to know about the blockchain basics. Take some time to check them out and you won’t regret it! Author Bio: Olivia is a passionate blogger who writes on topics of digital marketing, career, and self-development. She constantly tries to learn something new and to share this experience on various websites. Connect with her on Facebook and Twitter. Google introduces Machine Learning courses for AI beginners Microsoft start AI School to teach Machine Learning and Artificial Intelligence.

WhatsApp co-founder reveals why he left Facebook; is called ‘low class’ by a Facebook senior executive

Sugandha Lahoti
27 Sep 2018
5 min read
In an exclusive interview given to Forbes yesterday, WhatsApp co-founder Brian Acton opened up for the first time since he left Facebook 10 months ago about why he left and why he regrets selling the company to Facebook, criticizing Facebook's monetization strategy. In a post titled "The other side of the story", David Marcus, head of the Facebook Messenger unit, responded to Acton's allegations.

About four years ago, Acton and his co-founder, Jan Koum, sold WhatsApp to Facebook for $22 billion, making it Facebook's largest acquisition to date. Ten months ago, Acton quit Facebook, stating that he was looking to work for a non-profit. Then in March, as Facebook was at the receiving end of a public backlash over the Cambridge Analytica scandal, he sent out a viral tweet, "It is time. #deletefacebook.", with no explanation. Pretty soon, momentum gathered behind the #DeleteFacebook campaign, with several media outlets publishing guides on how to permanently delete Facebook accounts. In April, the other WhatsApp co-founder, Jan Koum, also left Facebook due to differences of opinion over data privacy and the messaging app's business model, according to a report from The Washington Post.

What Acton has said in the Forbes Interview
Several months after that tweet, Acton has now opened up about why he left Facebook and his regrets over selling WhatsApp to the company, as well as his disagreements with Mark Zuckerberg. In the interview, he appears to feel remorse for his actions, stating, "I sold my users' privacy to a larger benefit. I made a choice and a compromise. And I live with that every day." He further states that Facebook "isn't the bad guy, just very good businesspeople."

In the end, Facebook had tried to put a nondisclosure agreement in place as part of his proposed settlement. "That was part of the reason that I got sort of cold feet in terms of trying to settle with these guys. It was like, okay, well, you want to do these things I don't want to do," Acton says. "It's better if I get out of your way. And I did." He refused to take the deal and lost $800 million in the process. Acton also says that he was never able to develop a rapport with Zuckerberg. "I couldn't tell you much about the guy," he says.

Acton was unhappy with Facebook's monetization strategy. Facebook wanted WhatsApp to make money by showing targeted ads in WhatsApp's Status feature and by selling businesses (and later analytics) tools to chat with WhatsApp users. Acton criticized both proposals, feeling they "broke a social compact with its users." His vision for WhatsApp was "No ads, no games, no gimmicks".

David Marcus' "The other side of the story"
Acton's interview has not gone down well with many at Facebook, as is evident from the strong emotional response applauding Marcus' post from within the company. David Marcus, one of Facebook's high-level executives in its messaging unit, has publicly responded, stating that the interview "contained statements, and recollection of events that differ greatly from the reality I witnessed first-hand." However, he clarified early on that this is his personal view, not Facebook's, and that he has not been asked by anyone at Facebook to post his account of what happened. He says that Mark Zuckerberg has always supported founders and their teams much better than others, even at a cost to the company.
Per his post, "WhatsApp founders requested a completely different office layout when their team moved on campus. Much larger desks and personal space, a policy of not speaking out loud in the space, and conference rooms made unavailable to fellow Facebookers nearby. This irritated people at Facebook, but Mark personally supported and defended it." He also called Acton's claims that Facebook wanted WhatsApp to become a money-minting app dubious.

According to Marcus, end-to-end encryption on WhatsApp happened after the acquisition, and Jan Koum played a key role in convincing Mark of its importance. Once the encryption proposal had Mark's full support, he defended it whenever it faced backlash, and the pushback was never about advertising or data collection but about concerns for 'safety'. Mark's view was that WhatsApp was a private messaging app, and encryption helped ensure that people's messages were truly private.

Marcus also accuses Acton of "slow-playing development" while advocating for business messaging. "If you have internal questions about it, then work hard to prove that your approach has legs and demonstrate the value. Don't be passive-aggressive about it," he added. He also hit back hard at Acton's statements, saying that he finds Acton's actions a new standard of low-class: "I find attacking the people and company that made you a billionaire, and went to an unprecedented extent to shield and accommodate you for years, low-class." He also took a veiled jibe at other tech companies, saying, "Facebook is truly the only company that's singularly about people. Not about selling devices. Not about delivering goods with less friction. Not about entertaining you. Not about helping you find information. Just about people." This part of his perspective has drawn the most public criticism.

People, in general, had varied views about Marcus' post: some agreed with him, while others sided with Acton. As of now, there is no official response from Facebook.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality.
The White House is reportedly launching an antitrust investigation against social media companies.
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference.