
How-To Tutorials

How Change.org uses Flow, Elixir’s library to build concurrent data pipelines that can handle a trillion messages

Sugandha Lahoti
22 May 2019
7 min read
Last month at ElixirConf EU 2019, John Mertens, Principal Engineer at Change.org, delivered a session, "Lessons From Our First Trillion Messages with Flow", for developers interested in using Elixir to build data pipelines in real-world systems. For many Elixir converts, the attraction of Elixir is rooted in the promise of the BEAM concurrency model. The Flow library has made it easy to build concurrent data pipelines utilizing the BEAM. (BEAM was originally short for Bogdan's Erlang Abstract Machine, named after Bogumil "Bogdan" Hausman, who wrote the original version; the name is also read as Björn's Erlang Abstract Machine, after Björn Gustavsson, who wrote and maintains the current version.) The problem is that while the docs are great, there are not many resources on running Flow-based systems in production. In his talk, John shares some lessons his team learned from processing their first trillion messages through Flow.

Using Flow at Change.org

Change.org is a platform for social change, where people from all over the world come to start movements on topics of every kind and size. Technologically, Change.org is built primarily in Ruby and JavaScript, but in early 2018 the team started using Elixir to build a high-volume, mission-critical data processing pipeline. They chose Elixir for this new system because of its Flow library. Flow is a library for computational parallel flows in Elixir, built on top of GenStage. GenStage is a "specification and computational flow for Elixir", meaning it provides a way for developers to define a pipeline of work to be carried out by independent steps (or stages) running in separate processes. Flow allows developers to express computations on collections, similar to the Enum and Stream modules, except that the computations are executed in parallel across multiple GenStages.
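Flow's computations are written in Elixir, but the look-and-feel described above, ordinary collection code that transparently uses every core, has a rough cross-language analogy in Java's parallel streams. This is an illustration of the idea, not Flow itself:

```java
import java.util.List;
import java.util.stream.IntStream;

public class ParallelPipelineSketch {
    public static void main(String[] args) {
        // Looks like an ordinary collection pipeline, but the work is
        // spread across the common fork-join pool's worker threads.
        List<Integer> lengths = List.of("trillion", "messages", "with", "flow")
                .parallelStream()
                .map(String::length)
                .toList();
        System.out.println(lengths); // [8, 8, 4, 4]

        long evens = IntStream.rangeClosed(1, 1_000_000)
                .parallel()
                .filter(n -> n % 2 == 0)
                .count();
        System.out.println(evens); // 500000
    }
}
```

As in Flow, the pipeline shape stays declarative while the runtime decides how to fan the work out.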
At Change.org, the developers built proofs of concept in a few different languages and pitted them against each other, with the two main criteria being performance and developer happiness. Elixir came out as the clear winner. Whenever an event on Change.org gets added to a queue, their Elixir system pulls the messages off the queue, preps and transforms them, applies business logic, and generates side effects. Next, depending on a few parameters, each message is either passed on to another system, discarded, or retried. So far things have gone smoothly for them, which brought John to the lessons from processing their first trillion messages with Flow.

Lesson 1: Let Flow do the work

Flow and GenStage are both great libraries that provide a few game-changing features by default. The first is parallelism. Parallelism is beneficial for large-scale data processing pipelines, and Flow's abstractions make utilizing it easy: you write code that looks essentially like a standard Elixir pipeline, but it utilizes all of your CPU cores. The second feature of Flow is backpressure. GenStage specifies how Elixir processes should communicate with back-pressure. Simply put, backpressure means your system asks for more data to process instead of having data pushed onto it. With Flow, your data processing stage is in charge of requesting more events. This means that if your process depends on some other service and that service becomes slow, your whole flow slows down accordingly; no service gets overloaded with requests, and the whole system stays up.

Lesson 2: Organize your Flow

The next lesson is on how to set up your code to take advantage of Flow. These organizational tactics help Change.org keep their Flow system manageable in practice.

Keep the Flow simple

The golden rule, according to John, is to keep your flow simple. Start simple and then increase the complexity depending on the constraints of your system.
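The pull-based backpressure described in Lesson 1 can be sketched in plain Java with a bounded queue (a cross-language illustration, not Flow's API): the consumer implicitly asks for work by taking from the queue, and when it falls behind, the producer's put blocks instead of letting messages pile up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        // A small bounded buffer: the producer blocks when it is full,
        // which is the essence of pull-based backpressure.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);
        List<Integer> processed = new ArrayList<>();

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 20; i++) {
                try {
                    queue.put(i); // blocks if the consumer is behind
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 20; i++) {
                try {
                    int msg = queue.take(); // "ask" for the next event
                    Thread.sleep(1);        // simulate slow downstream work
                    processed.add(msg * 2); // transform step
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();

        System.out.println(processed.size());  // 20
        System.out.println(processed.get(19)); // 38
    }
}
```

No matter how slow the consumer gets, at most four messages are ever in flight, so nothing upstream is overwhelmed.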
He discusses a quote from the Flow docs, which states:

[box type="shadow" align="" class="" width=""]If you can solve a problem without using partition at all, that is preferred. Those are typically called embarrassingly parallel problems.[/box]

If you can shape your problem into an embarrassingly parallel problem, he says, Flow can really shine.

Know your code and your system

He also advises that developers should know their code and understand their systems, and gives an example of how SQS is used with Flow. Amazon SQS (Simple Queue Service) is a message-queuing system (also used at Change.org) that allows you to write distributed applications by exposing a message pipeline that can be processed in the background by workers. Its two main features are the visibility window and acknowledgments. When you pull a message off a queue, you have a set amount of time to acknowledge that you have received and processed that message; that amount of time is called the visibility window, and it is configurable. If you don't acknowledge the message within the visibility window, it goes back into the queue. If a message is pulled and not acknowledged a configured number of times, it is either discarded or sent to a dead-letter queue. He then walks through an example of a Flow they use in production.

Maintain a consistent data structure

You should also use a consistent data structure, or token, throughout the data pipeline. The data structure most essential to the flow at Change.org is the message struct, %Message{}. When a message comes in from SQS, they create a message struct based on it. Having the same data structure at every step of the flow is how they keep their system simple. He then explains example code showing how they handle different types of data while keeping the flow simple.
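Elixir's %Message{} has no direct Java equivalent, but the idea of one consistent token flowing through every stage can be sketched with a record. The fields here are invented for illustration, not Change.org's actual struct:

```java
import java.util.List;

public class MessageTokenSketch {
    // One consistent shape for every pipeline stage (fields are illustrative).
    record Message(String id, String body, String status) {}

    static Message prep(Message m) {
        return new Message(m.id(), m.body().trim(), "prepped");
    }

    static Message transform(Message m) {
        return new Message(m.id(), m.body().toUpperCase(), "transformed");
    }

    public static void main(String[] args) {
        List<Message> incoming =
                List.of(new Message("sqs-1", "  sign the petition  ", "new"));
        // Every stage takes a Message and returns a Message.
        List<Message> out = incoming.stream()
                .map(MessageTokenSketch::prep)
                .map(MessageTokenSketch::transform)
                .toList();
        System.out.println(out.get(0).body());   // SIGN THE PETITION
        System.out.println(out.get(0).status()); // transformed
    }
}
```

Because every stage speaks the same type, stages can be added, removed, or reordered without renegotiating the contract between them.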
Isolate the side effects

The next organizational tactic Change.org employs to keep their Flow system manageable is to isolate the side effects. Side effects are mutations; if something goes wrong in a mutation, you need to be able to roll it back. In the spirit of keeping the flow simple, at Change.org they batch all the side effects together and put them at the end, so that nothing gets lost if they need to roll back. However, there are certain cases where you can't put all the side effects together and need a different strategy. These cases can be handled using sagas. The saga pattern is a way to handle long-lived transactions by providing rollback instructions for each step along the way, so that if a step goes bad, its compensation can simply be run. There is also an Elixir implementation of sagas called Sage.

Lesson 3: Tune the Flow

How you optimize your Flow depends on the shape of your problem; tailor the Flow to your own use case to squeeze out all the throughput. There are three things you can do to shape your Flow: measure the Flow's performance, tune the Flow itself, and help the Flow from outside it. Apart from the three main lessons on data processing through Flow, John also mentions a few others, namely:

Graceful producer shutdowns
Flow-level integration tests
Complex batching

Finally, John gave a glimpse of Broadway from Change.org's codebase. Broadway allows developers to build concurrent, multi-stage data ingestion and data processing pipelines with Elixir. It takes on the burden of defining concurrent GenStage topologies and provides a simple configuration API that automatically defines concurrent producers, concurrent processing, batch handling, and more, leading to both time- and cost-efficient ingestion and processing of data.
Some of its features include back-pressure, automatic acknowledgments at the end of the pipeline, batching, automatic restarts in case of failures, graceful shutdown, built-in testing, and partitioning. José Valim's keynote at ElixirConf EU 2019 also talked about streamlining data processing pipelines using Broadway. You can watch the full video of John Mertens' talk here. John is a principal engineer at Change.org, using Elixir to empower social action in his organization.

Why Ruby developers like Elixir
Introducing Mint, a new HTTP client for Elixir
Developer community mourns the loss of Joe Armstrong, co-creator of Erlang


Getting started with designing RESTful APIs

Vincy Davis
21 May 2019
9 min read
The application programming interface (API) is one of the most promising software paradigms to address anything, anytime, anywhere, and any device, which is one substantial need of the digital world at the moment. This article discusses how APIs and API design help to address those challenges and bridge the gaps. It covers a few essential API design guidelines, such as consistency, standardization, reusability, and accessibility through REST interfaces, which can equip API designers with a better thought process for their API modeling. This article is an excerpt taken from the book 'Hands-On RESTful API Design Patterns and Best Practices', written by Harihara Subramanian and Pethuru Raj. In this article, you will learn the various design rules of RESTful APIs, including the use of Uniform Resource Identifiers, URI authority, resource modelling, and many more.

Goals of RESTful API design

APIs should be straightforward, unambiguous, easy to consume, well-structured, and, most importantly, accessible with well-known and standardized HTTP methods. They are one of the best possible solutions for resolving many digitization challenges out of the box. The following are the basic API design goals:

Affordance
Loosely coupled
Leverage existing web architecture

RESTful API design rules

Best practices and design principles are guidelines that API designers try to incorporate in their API design. To make an API design RESTful, certain rules are followed, such as the following:

Use of Uniform Resource Identifiers
URI authority
Resource modelling
Resource archetypes
URI path
URI query
Metadata design rules (HTTP headers and returning error codes) and representations

It will be easier to design and deliver the finest RESTful APIs if we understand these design rules.

Uniform Resource Identifiers

REST APIs should use Uniform Resource Identifiers (URIs) to represent their resources.
The indications should be clear and straightforward, so that they communicate the API's resources crisply and clearly:

A simple-to-understand URI is https://xx.yy.zz/sevenwonders/tajmahal/india/agra; the emphasized path segments clearly indicate the intention or representation
A harder-to-understand URI is https://xx.yy.zz/books/36048/9780385490627; the text after books is very hard for anyone to understand

So having a simple, understandable representation in the URI is critical in RESTful API design.

URI formats

The syntax of the generic URI is scheme "://" authority "/" path [ "?" query ] [ "#" fragment ], and the following are the rules for API designs:

Use the forward slash (/) separator
Don't use a trailing forward slash
Use hyphens (-)
Avoid underscores (_)
Prefer all lowercase letters in a URI path
Do not include file extensions

REST API URI authority

Having seen the rules for URIs in general, let's look at the authority (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) portion of the REST API URI:

Use consistent sub-domain names for an API: as you see in http://api.baseball.restfulapi.org, the API domain should have api as part of its sub-domain. The top-level domain and the first sub-domain name indicate the service owner; an example could be baseball.restfulapi.org
Use consistent sub-domain names for a developer portal: as we saw in the API playgrounds section, API providers should expose sites for app developers to test their APIs, called a developer portal. By convention, the developer portal's sub-domain should contain developer. An example would be http://developer.baseball.restfulapi.org

Resource modelling

Resource modelling is one of the primary activities for API designers, as it helps to establish the API's fundamental concepts.
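As an aside, the URI format rules above are mechanical enough to check in code. This is a minimal sketch of my own (not from the book) that flags violations in a URI path:

```java
import java.util.ArrayList;
import java.util.List;

public class UriPathRules {
    // Returns the list of format rules a URI path violates (empty = clean).
    static List<String> violations(String path) {
        List<String> problems = new ArrayList<>();
        if (path.endsWith("/")) problems.add("trailing slash");
        if (path.contains("_")) problems.add("underscore (use hyphens)");
        if (!path.equals(path.toLowerCase())) problems.add("uppercase letters");
        String lastSegment = path.substring(path.lastIndexOf('/') + 1);
        if (lastSegment.contains(".")) problems.add("file extension");
        return problems;
    }

    public static void main(String[] args) {
        System.out.println(violations("/sevenwonders/tajmahal/india/agra")); // []
        // Violates the underscore, lowercase, and file-extension rules:
        System.out.println(violations("/Seven_Wonders/list.json"));
    }
}
```

A check like this could run in a CI step over a service's route table to keep new endpoints consistent.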
In general, the URI path always conveys REST resources, and each part of the URI is separated by a forward slash (/) to indicate a unique resource within the model's hierarchy. Each segment separated by a forward slash indicates an addressable resource, as follows:

https://api-test.lufthansa.com/v1/profiles/customers
https://api-test.lufthansa.com/v1/profiles
https://api-test.lufthansa.com

Customers, profiles, and APIs are all unique resources in the preceding individual URI models. So, resource modelling is a crucial design step before designing URI paths.

Resource archetypes

Each service provided by the API is an archetype, and archetypes indicate the structures and behaviors of REST API designs. Resource modelling should start with a few fundamental resource archetypes; usually, a REST API is composed of four unique archetypes, as follows:

Document
Collection
Store
Controller

URI path

This section discusses the rules for designing the meaningful URI path (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) portion of REST API URIs. The following are the rules about URI paths:

Use singular nouns for document names, for example, https://api-test.lufthansa.com/v1/profiles/customers/memberstatus
Use plural nouns for collections and stores:
Collection: https://api-test.lufthansa.com/v1/profiles/customers
Store: https://api-test.lufthansa.com/v1/profiles/customers/memberstatus/preferences
As controller names represent an action, use a verb or verb phrase for controller resources; an example would be https://api-test.lufthansa.com/v1/profiles/customers/memberstatus/reset
Do not use CRUD function names in URIs:
Correct URI example: DELETE /users/1234
Incorrect URIs: DELETE /user-delete/1234 and POST /users/1234/delete

URI query

These are the rules relating to the design of the query (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) portion of REST API URIs.
The query component of the URI also contributes to the unique identification of the resource, and the following are the rules about URI queries:

Use the query to filter collections or stores. An example with limit in the query: https://api.lufthansa.com/v1/operations/flightstatus/arrivals/ZRH/2018-05-21T06:30?limit=40
Use the query to paginate collection or store results. An example with offset in the query: https://api.lufthansa.com/v1/operations/flightstatus/arrivals/ZRH/2018-05-21T06:30?limit=40&offset=10

HTTP interactions

A REST API doesn't require any special transport-layer mechanisms; all it needs is the basic Hypertext Transfer Protocol and its methods to represent its resources over the web. We will touch upon how REST should utilize those basic HTTP methods in the upcoming sections.

Request methods

The client specifies the intended interaction with well-defined, semantic HTTP methods, such as GET, POST, PUT, DELETE, PATCH, HEAD, and OPTIONS. The following are the rules that an API designer should take into account when planning their design:

Don't tunnel other requests through the GET and POST methods
Use the GET method to retrieve a representation of a resource
Use the HEAD method to retrieve response headers
Use the PUT method to update and insert a stored resource
Use the PUT method to update mutable resources
Use the POST method to create a new resource in a collection
Use the POST method for controller execution
Use the DELETE method to remove a resource from its parent
Use the OPTIONS method to retrieve metadata

Response status codes

The HTTP specification defines standard status codes, and a REST API can use the same status codes to deliver the results of a client request.
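As a brief aside, the limit/offset pagination convention shown above can be applied server-side by parsing the query string and slicing the collection. This sketch is my own illustration, with made-up defaults:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PaginationSketch {
    // Parse "limit=40&offset=10" into a map (illustrative; no URL-decoding).
    static Map<String, String> parseQuery(String query) {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) params.put(kv[0], kv[1]);
        }
        return params;
    }

    // Slice a collection according to limit/offset, with illustrative defaults.
    static <T> List<T> paginate(List<T> items, Map<String, String> params) {
        int limit = Integer.parseInt(params.getOrDefault("limit", "10"));
        int offset = Integer.parseInt(params.getOrDefault("offset", "0"));
        int from = Math.min(offset, items.size());
        int to = Math.min(offset + limit, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> arrivals = List.of(1, 2, 3, 4, 5, 6, 7, 8);
        Map<String, String> params = parseQuery("limit=3&offset=2");
        System.out.println(paginate(arrivals, params)); // [3, 4, 5]
    }
}
```

Clamping from and to against the collection size keeps out-of-range offsets from throwing rather than returning an empty page.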
The status-code categories and a few associated REST API rules are as follows, so that APIs can apply the right code according to the processing status:

1xx Informational: provides protocol-level information
2xx Success: the client request was accepted (successfully), as in the following examples:
200: OK
201: Created
202: Accepted
204: No Content
3xx Redirection: the client request is redirected by the server to a different endpoint to fulfill the request:
301: Moved Permanently
302: Found
303: See Other
304: Not Modified
307: Temporary Redirect
4xx Client Error: errors at the client side:
400: Bad Request
401: Unauthorized
403: Forbidden
404: Not Found
405: Method Not Allowed
406: Not Acceptable
409: Conflict
412: Precondition Failed
415: Unsupported Media Type
5xx Server Error: errors at the server side:
500: Internal Server Error

Metadata design

This section looks at the rules for metadata design, including HTTP headers and media types.

HTTP headers

The HTTP specification defines a set of standard headers, through which a client can get information about a requested resource; headers carry messages that indicate their representations and may serve as directives to control intermediary caches. The following points suggest a few rules conforming to the standard HTTP headers:

Should use Content-Type
Should use Content-Length
Should use Last-Modified in responses
Should use ETag in responses
Stores must support conditional PUT requests
Should use Location to specify the URI of newly created resources (through PUT)
Should leverage HTTP cache headers
Should use expiration headers with 200 ("OK") responses
May use expiration caching headers with 3xx and 4xx responses
Mustn't use custom HTTP headers to change the behavior of HTTP methods

Media types and media type design rules

Media types help to identify the form of the data in a request or response message body; the Content-Type header value represents a media type, also known as the Multipurpose Internet Mail Extensions (MIME) type.
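As a quick aside, the status-code categories listed above can be recovered from the numeric code alone, which is handy for logging and metrics. A sketch of my own:

```java
public class StatusCategory {
    // Map an HTTP status code to the category names used above.
    static String categoryOf(int status) {
        return switch (status / 100) {
            case 1 -> "Informational";
            case 2 -> "Success";
            case 3 -> "Redirection";
            case 4 -> "Client Error";
            case 5 -> "Server Error";
            default -> "Unknown";
        };
    }

    public static void main(String[] args) {
        System.out.println(categoryOf(201)); // Success
        System.out.println(categoryOf(404)); // Client Error
        System.out.println(categoryOf(500)); // Server Error
    }
}
```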
Media type design influences many aspects of a REST API design, including hypermedia, opaque URIs, and distinct, descriptive media types, so that app developers or clients can rely on the self-descriptive features of the REST API. The following are two rules of media type design:

Use application-specific media types
Support media type negotiation in cases of multiple representations

A REST API can also support media type selection using a query parameter: to support clients with simple links and easy debugging, it may accept a query parameter named accept, with a value format that mirrors that of the Accept HTTP request header. REST APIs should prefer that more precise and generic approach over alternatives such as a format query parameter, as in GET https://swapi.co/api/planets/1/?format=json.

Summary

We have briefly discussed the goals of RESTful API design and how API designers need to follow design principles and rules so that they can create better RESTful APIs. To learn more about the rules for the most common resource formats, such as JSON and hypermedia, error types, and client concerns, head over to the book 'Hands-On RESTful API Design Patterns and Best Practices'.

Svelte 3 releases with reactivity through language instead of an API
Get to know ASP.NET Core Web API [Tutorial]
Implement an API Design-first approach for building APIs [Tutorial]


Approx. 250 public network users affected during Stack Overflow's security attack

Vincy Davis
20 May 2019
4 min read
In a security update released on May 16, Stack Overflow confirmed that "some level of their production access was gained on May 11". In a follow-up "Update to Security Incident" post, Stack Overflow provides further details of the attack, including its actual date and duration, how it took place, and the company's response to the incident. According to the update, the first intrusion happened on May 5, when a build deployed to the development tier for stackoverflow.com contained a bug. This allowed the attacker to log in to the development tier and escalate their access to the production version of stackoverflow.com. From May 5 onwards, the intruder explored the site until May 11, at which point they made changes in the Stack Overflow system to obtain privileged access on production. This change was identified by the Stack Overflow team, who immediately revoked the intruder's network-wide access and initiated an investigation into the intrusion. As part of their security procedure to protect sensitive customer data, Stack Overflow maintains separate infrastructure and networks for clients of their Teams, Business, and Enterprise products, and they have found no evidence of these systems or customer data being accessed. The Advertising and Talent businesses of Stack Overflow were also not impacted. However, the team has identified some privileged web requests that the attacker made, which might have returned the IP address, names, or emails of approximately 250 public network users of Stack Exchange. The affected users will be notified by Stack Overflow.

Steps taken by Stack Overflow in response to the attack

Terminated the unauthorized access to the system.
Conducted an extensive and detailed audit of all logs and databases they maintain, which allowed them to trace the steps and actions that were taken.
Remediated the original issues that allowed the unauthorized access and escalation.
Issued a public statement proactively.
Engaged a third-party forensics and incident-response firm to assist with both remediation and learnings.
Took precautionary measures such as cycling secrets, resetting company passwords, and evaluating systems and security levels.

Stack Overflow has again promised to provide more public information after their investigation cycle concludes. Many developers are appreciating the quick confirmation, updates, and response from Stack Overflow during this security incident.

https://twitter.com/PeterZaitsev/status/1129542169696657408

A user on Hacker News comments, "I think this is one of the best sets of responses to a security incident I've seen: Disclose the incident ASAP, even before all facts are known. The disclosure doesn't need to have any action items, and in this case, didn't. Add more details as investigation proceeds, even before it fully finishes, to help clarify scope. The proactive communication and transparency could have downsides (causing undue panic), but I think these posts have presented a sense that they have it mostly under control. Of course, this is only possible because they, unlike some other companies, probably do have a good security team who caught this early. I expect the next (or perhaps the 4th) post will be a fuller post-mortem from after the incident. This series of disclosures has given me more confidence in Stack Overflow than I had before!"

Another user on Hacker News added, "Stack Overflow seems to be following a very responsible incident response procedure, perhaps instituted by their new VP of Engineering (the author of the OP). It is nice to see."

Read More

2019 Stack Overflow survey: A quick overview
Bryan Cantrill on the changing ethical dilemmas in Software Engineering
Listen to Uber engineer Yuri Shkuro discuss distributed tracing and observability [Podcast]


Understanding advanced patterns in RESTful API [Tutorial]

Vincy Davis
20 May 2019
11 min read
Every software designer agrees that design patterns, and solving familiar yet recurring design problems by implementing design patterns, are inevitable in the modern software design-and-development life cycle. These advanced patterns help developers toward the best possible RESTful services implementation. This article is an excerpt taken from the book 'Hands-On RESTful API Design Patterns and Best Practices', written by Harihara Subramanian and Pethuru Raj. The book covers design strategy, essential and advanced RESTful API patterns, and legacy modernization to microservices-centric apps. This article will help you understand advanced patterns in RESTful APIs, including versioning, authorization, uniform contract, entity endpoints, and many more.

Versioning

The general rules of thumb we'd like to follow when versioning APIs are as follows:

Upgrade the API to a new major version when the new implementation breaks existing customer implementations
Upgrade the API to a new minor version when the new implementation provides enhancements and bug fixes; however, ensure that the implementation takes care of backward compatibility and has no impact on existing customer implementations

There are four different ways we can implement versioning in our API.

Versioning through the URI path

The major and minor version changes can be part of the URI; for example, to represent v1 or v2 of the API, the URI can be http://localhost:9090/v1/investors or http://localhost:9090/v2/investors, respectively. URI path versioning is a popular way of managing API versions due to its simple implementation.
Versioning through query parameters

Another simple method for implementing the version reference is to make it part of the request parameters, for example http://localhost:9090/investors?version=1 or http://localhost:9090/investors?version=2.1.0:

```java
@GetMapping("/investors")
public List<Investor> fetchAllInvestorsForGivenVersionAsParameter(
        @RequestParam("version") String version) throws VersionNotSupportedException {
    if (!(version.equals("1.1") || version.equals("1.0"))) {
        throw new VersionNotSupportedException("version " + version);
    }
    return investorService.fetchAllInvestors();
}
```

Versioning through custom headers

A custom header allows the client to keep the same URIs, regardless of any version upgrades. The following code snippet shows version handling through a custom header named x-resource-version. Note that the custom header can have any name; in our example, we call it x-resource-version:

```java
@GetMapping("/investorsbycustomheaderversion")
public List<Investor> fetchAllInvestors...(
        @RequestHeader("x-resource-version") String version) throws VersionNotSupportedException {
    return getResultsAccordingToVersion(version);
}
```

Versioning through content negotiation

Providing the version information through the Accept (request) header, along with the Content-Type (media) header in the response, is the preferred way, as it allows APIs to be versioned without any impact on the URI. This is done by a code implementation of versioning through Accept and Content-Type:

```java
@GetMapping(value = "/investorsbyacceptheader",
        headers = "Accept=application/investors-v1+json, application/investors-v1.1+json")
public List<Investor> fetchAllInvestorsForGiven..() throws VersionNotSupportedException {
    return getResultsAccordingToVersion("1.1");
}
```

The right versioning approach is determined on a case-by-case basis; however, content negotiation and custom headers are the most RESTful-compliant options.
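The Spring snippets above are fragments of the book's investor service. As a self-contained sketch (the media-type scheme follows the snippets, but the parser itself is my own illustration), extracting the version from an Accept header like application/investors-v1.1+json can be done with a regular expression:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AcceptVersionSketch {
    // Matches media types of the form application/investors-v<major[.minor]>+json
    private static final Pattern VERSIONED =
            Pattern.compile("application/investors-v(\\d+(?:\\.\\d+)?)\\+json");

    // Pick the highest version offered in a (possibly multi-valued) Accept header.
    // Comparing versions as doubles is naive (1.10 vs 1.2) but fine for a sketch.
    static Optional<String> negotiatedVersion(String acceptHeader) {
        Matcher m = VERSIONED.matcher(acceptHeader);
        String best = null;
        while (m.find()) {
            String v = m.group(1);
            if (best == null || Double.parseDouble(v) > Double.parseDouble(best)) {
                best = v;
            }
        }
        return Optional.ofNullable(best);
    }

    public static void main(String[] args) {
        String accept = "application/investors-v1+json, application/investors-v1.1+json";
        System.out.println(negotiatedVersion(accept).orElse("none")); // 1.1
    }
}
```

Returning Optional.empty() when no versioned media type matches gives the caller a natural place to fall back to a default version or return 406 Not Acceptable.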
Authorization

How do we ensure our REST API implementation is accessible only to genuine users and not to everyone? In our example, the investor list should not be visible to all users, and the stocks URI should not be exposed to anyone other than the legitimate investor. Here we implement simple basic authentication through the Authorization header. Basic authentication is a standard HTTP header (compliant with the RESTful API constraints) with the user's credentials encoded in Base64. The credentials (username and password) are encoded in the format username:password. The credentials are encoded, not encrypted, and are vulnerable to certain security attacks, so it's essential that any REST API implementing basic authentication communicates over SSL (HTTPS).

Authorization with the default key

Securing a REST API with basic authentication is made exceptionally simple by the Spring Security framework. Merely adding the following entries to pom.xml provides basic authentication to our investor service app:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
```

Now rebuild (mvn clean package) the application and restart it. It's time to test our APIs with the Postman tool. When we hit the URL, unlike in our earlier examples, we'll see an error complaining that full authorization is required to access this resource. The error is due to the addition of spring-security to our pom.xml file. We can access the REST API by observing the generated default security password in the console output, or by searching for it in our log file; that key lets anyone access our API. We need to provide it via the BasicAuth Authorization header for the API we are accessing, and we will then see the results without any authentication errors. Note that the Authorization header carries the XYZKL... token prefixed with Basic, as we use the HTTP Authorization header to enforce REST API authentication.
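As a quick sketch (my own illustration, using only java.util.Base64, with placeholder credentials), this is how the value of the Basic Authorization header is built from a username and password:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {
    // Build the value of the Authorization header for HTTP basic authentication:
    // the scheme name "Basic", then Base64("username:password").
    static String basicAuthHeader(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Credentials here are placeholders, not real ones.
        System.out.println(basicAuthHeader("user", "password"));
        // Basic dXNlcjpwYXNzd29yZA==
    }
}
```

Decoding the Base64 string trivially recovers the password, which is exactly why the article insists on HTTPS for basic authentication.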
Authorization with credentials

In many real-world situations, we need specific credentials to access the API rather than the default one; in such cases, we can enhance our investor service application and secure it with custom credentials using a few additional out-of-the-box Spring modules. In our investor service, we add a new class, called PatronsAuthConfig.java, which helps the app enforce the credentials for the URLs we would like to secure:

```java
@Configuration
@EnableWebSecurity
public class PatronsAuthConfig extends WebSecurityConfigurerAdapter {
    .....
```

We can implement the security with a few annotations.

Uniform contract

Services always evolve with additional capabilities, enhancements, and defect fixes; however, a service consumer should be able to consume the latest version of our services without needing to keep changing their implementation or REST API endpoints. The uniform contract pattern comes to the rescue. The pattern suggests the following measures:

Standardize the service contract and make it uniform across all service endpoints
Abstract the service endpoints from individual service capabilities
Follow the REST principles, where the endpoints use only HTTP verbs and express the underlying resources' executable actions only with HTTP verbs

Entity endpoints

If service clients want to interact with entities, such as investors and their stocks, without needing to manage a compound identifier for both investor and stock, we need a pattern called entity endpoints. Entity endpoints suggest exposing each entity as an individual, lightweight endpoint of the service it resides in, so service consumers get global addressability of service entities. The entity endpoints expose reusable enterprise resources, so service consumers can reuse and share the entity resources.
The investor service exposes a couple of entity endpoints, such as /investors/investorId and investor/stockId; these are examples of entity endpoints that service consumers can reuse and standardize on.

Endpoint redirection

Changing service endpoints isn't always avoidable. When an endpoint does change, how will the service client know about it and use the new endpoint? With standard HTTP 3xx return codes and the Location header: on receiving 301 Moved Permanently or 307 Temporary Redirect, the service client can act accordingly. The endpoint redirection pattern suggests returning standard HTTP headers that automatically refer clients from stale endpoints to the current ones. The service consumers may then call the new endpoints found in the Location header.

Idempotent

Imagine a bank's debit API failing immediately after deducting some amount from the client's account. The client doesn't know the deduction happened and reissues the debit call; alas, the client loses money. So how can a service implementation handle messages/data and produce the same result, even after multiple calls? Idempotency is one of the fundamental resilience and scalability patterns, as it decouples the service implementation nodes across distributed systems. Whether dealing with data or messages, services should always be designed to be idempotent. There is a simple solution: use the idempotent capabilities of HTTP web APIs, whereby services guarantee that any number of repeated calls, caused by intermittent communication failures, is safe, and the server can process those repeated calls without any side effects.

Bulk operation

Marking a list of emails as read in our email client is an example of a bulk operation; the customer chooses more than one email to tag as Read, and one REST API call does the job instead of multiple calls to an underlying API.
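To make the email example concrete, here is a minimal server-side sketch of a bulk mark-as-read operation (the in-memory map and the method names are illustrative assumptions): one request carries many email IDs, and the response reports which of them were actually updated, so partial success stays visible to the client.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BulkMarkAsRead {

    // Tracks read state per email id; stands in for a real mailbox store.
    private final Map<String, Boolean> readState = new HashMap<>();

    public void receive(String emailId) {
        readState.put(emailId, false);
    }

    // One bulk call replaces N single-item calls from the client.
    // Returns the ids that were actually updated (unknown or already-read
    // ids are skipped), so the caller can see partial successes.
    public List<String> markAsRead(List<String> emailIds) {
        List<String> updated = new ArrayList<>();
        for (String id : emailIds) {
            if (Boolean.FALSE.equals(readState.get(id))) {
                readState.put(id, true);
                updated.add(id);
            }
        }
        return updated;
    }
}
```

Note that this sketch is also idempotent: repeating the same bulk call simply returns an empty update list instead of failing or double-applying.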
The following two approaches are suggested for implementing bulk operations:

Content-based bulk operation
Custom-header action-identifier-based bulk operation

Bulk operations may involve many other aspects, such as ETags, asynchronous execution, or a parallel-stream implementation, to make them effective.

Circuit breaker

A circuit breaker is an automatic switch designed to protect an entire electrical circuit from damage due to excess current load as a result of a short circuit or overload. The same concept applies when services interact with many other services. A failure due to any issue can potentially have catastrophic effects across the application, and preventing such cascading impacts is the sole aim of the circuit-breaker pattern. Hence, this pattern helps subsystems to fail gracefully and prevents complete system failure as a result of a subsystem failure. Three different states constitute the circuit breaker:

Closed
Open
Half-open

There's a new service, called circuit-breaker-service-consumer, which holds all the necessary circuit-breaker implementation, along with a call to our first service.

Combining the circuit breaker pattern and the retry pattern

As software designers, we understand the importance of gracefully handling application failures and failed operations. We may achieve better results by combining the retry pattern with the circuit breaker pattern, as doing so gives the application greater flexibility in handling failures. The retry pattern enables the application to retry failed operations, expecting them to eventually succeed; however, unbounded retries may amount to a denial of service (DoS) within our own application, which is exactly what the circuit breaker guards against.

API facade

An API facade abstracts a complex subsystem from its callers and exposes only the necessary details as interfaces to the end user. In cases where a client would otherwise need multiple service calls, the client can call one API facade, which makes the interaction simpler and more meaningful.
In other words, what would otherwise require the client to call multiple endpoints can be implemented behind a single API endpoint. API facades provide high scalability and high performance as well. The investor service implements a simple API facade for its delete operations. As we saw earlier, the delete methods call the design-for-intent methods; however, we have made those methods abstract to the caller by introducing a simple interface to our investor service. That brings the facade to our API. The interface for the delete service is shown as follows:

public interface DeleteServiceFacade {
  boolean deleteAStock(String investorId, String stockTobeDeletedSymbol);
  boolean deleteStocksInBulk(String investorId, List<String> stocksSymbolsList);
}

Backend for frontend

Backend for frontend (BFF) is a pattern first described by Sam Newman; it helps to bridge API design gaps. BFF suggests introducing a layer between the user experience and the resources it calls. It also helps API designers avoid customizing a single backend for multiple interfaces: each interface can define the unique requirements it needs to serve its frontend, without worrying about impacting other frontend implementations. BFF may not fit in cases where multiple interfaces make the same requests to the backend, or where only one interface interacts with the backend services. Caution should therefore be exercised when deciding on separate, exclusive APIs/interfaces, as they warrant additional, lifelong maintenance, duplicated security hardening across layers, and customized designs that can lead to security lapses and defect leaks.

Summary

In this article, we have discussed versioning our APIs, securing APIs with authorization, and enabling service clients with uniform contract, entity endpoint, and endpoint redirection implementations. We also learned about idempotency and its importance, and about powering APIs with bulk operations.
Having covered various advanced patterns, we concluded the article with the circuit breaker and the BFF patterns. These advanced patterns of RESTful APIs will help us provide our customers and app developers with the best-possible RESTful services implementation. To learn more about the rules for the most common resource formats, such as JSON and hypermedia, and error types, as well as client concerns, head over to the book, 'Hands-On RESTful API Design Patterns and Best Practices'.

Inspecting APIs in ASP.NET Core [Tutorial]
Google announces the general availability of a new API for Google Docs
The tug of war between Google and Oracle over API copyright issue has the future of software development in the crossfires
Bhagyashree R
19 May 2019
15 min read
Implementing routing with React Router and GraphQL [Tutorial]

Routing is essential to most web applications. You cannot cover all of the features of your application in just one page. It would be overloaded, and your user would find it difficult to understand. Sharing links to pictures, profiles, or posts is also very important for a social network such as Graphbook. It is also crucial to split content into different pages, due to search engine optimization (SEO). This article is taken from the book Hands-on Full-Stack Web Development with GraphQL and React by Sebastian Grebe. This book will guide you in implementing applications by using React, Apollo, Node.js, and SQL. By the end of the book, you will be proficient in using GraphQL and React for your full-stack development requirements. To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. In this article, we will learn how to do client-side routing in a React application. We will cover the installation of React Router, implement routes, create user profiles with GraphQL backend, and handle manual navigation. Installing React Router We will first start by installing and configuring React Router 4 by running npm: npm install --save react-router-dom From the package name, you might assume that this is not the main package for React. The reason for this is that React Router is a multi-package library. That comes in handy when using the same tool for multiple platforms. The core package is called react-router. There are two further packages. The first one is the react-router-dom package, which we installed in the preceding code, and the second one is the react-router-native package. If at some point, you plan to build a React Native app, you can use the same routing, instead of using the browser's DOM for a real mobile app. The first step that we will take introduces a simple router to get our current application working, including different paths for all of the screens. 
There is one thing that we have to prepare before continuing. For development, we are using the webpack development server. To get the routing working out of the box, we will add two parameters to the webpack.client.config.js file. The devServer field should look as follows:

devServer: {
  port: 3000,
  open: true,
  historyApiFallback: true,
},

The historyApiFallback field tells the devServer to serve the index.html file, not only for the root path, http://localhost:3000/, but also when it would typically receive a 404 error. That happens when the path does not match a file or folder, which is normal when implementing routing. The output field at the top of the config file must have a publicPath property, as follows:

output: {
  path: path.join(__dirname, buildDirectory),
  filename: 'bundle.js',
  publicPath: '/',
},

The publicPath property tells webpack to prefix the bundle URL with an absolute path, instead of a relative path. When this property is not included, the browser cannot download the bundle when visiting the sub-directories of our application, as we are implementing client-side routing.

Implementing your first route

Before implementing the routing, we will clean up the App.js file. Create a Main.js file next to the App.js file in the client folder. Insert the following code:

import React, { Component } from 'react';
import Feed from './Feed';
import Chats from './Chats';
import Bar from './components/bar';
import CurrentUserQuery from './components/queries/currentUser';

export default class Main extends Component {
  render() {
    return (
      <CurrentUserQuery>
        <Bar changeLoginState={this.props.changeLoginState}/>
        <Feed />
        <Chats />
      </CurrentUserQuery>
    );
  }
}

As you might have noticed, the preceding code is pretty much the same as the logged-in condition inside the App.js file. The only change is that the changeLoginState function is taken from the properties, and is not directly a method of the component itself.
That is because we split this part out of App.js and put it into a separate file. This improves reusability for other components that we are going to implement. Now, open and replace the render method of the App component to reflect those changes, as follows:

render() {
  return (
    <div>
      <Helmet>
        <title>Graphbook - Feed</title>
        <meta name="description" content="Newsfeed of all your friends on Graphbook" />
      </Helmet>
      <Router loggedIn={this.state.loggedIn} changeLoginState={this.changeLoginState}/>
    </div>
  )
}

If you compare the preceding method with the old one, you can see that we have inserted a Router component, instead of directly rendering either the posts feed or the login form. The original components of the App.js file are now in the previously created Main.js file. Here, we pass the loggedIn state variable and the changeLoginState function to the Router component. Remove the dependencies at the top, such as the Chats and Feed components, because we won't use them any more thanks to the new Main component. Add the following line to the dependencies of our App.js file:

import Router from './router';

To get this code working, we have to implement our custom Router component first. Generally, it is easy to get the routing running with React Router, and you are not required to separate the routing functionality into its own file, but doing so makes it more readable. To do this, create a new router.js file in the client folder, next to the App.js file, with the following content:

import React, { Component } from 'react';
import LoginRegisterForm from './components/loginregister';
import Main from './Main';
import { BrowserRouter as Router, Route, Redirect, Switch } from 'react-router-dom';

export default class Routing extends Component {
  render() {
    return (
      <Router>
        <Switch>
          <Route path="/app" component={() => <Main changeLoginState={this.props.changeLoginState}/>}/>
        </Switch>
      </Router>
    )
  }
}

At the top, we import all of the dependencies.
They include the new Main component and the react-router package. The problem with the preceding code is that we are only listening for one route, which is /app. If you are not logged in, there will be many errors that are not covered. The best thing to do would be to redirect the user to the root path, where they can log in. Advanced routing with React Router The primary goal of this article is to build a profile page, similar to Facebook, for your users. We need a separate page to show all of the content that a single user has entered or created. Parameters in routes We have prepared most of the work required to add a new user route. Open up the router.js file again. Add the new route, as follows: <PrivateRoute path="/user/:username" component={props => <User {...props} changeLoginState={this.props.changeLoginState}/>} loggedIn={this.props.loggedIn}/> Those are all of the changes that we need to accept parameterized paths in React Router. We read out the value inside of the new user page component. Before implementing it, we import the dependency at the top of router.js to get the preceding route working: import User from './User'; Create the preceding User.js file next to the Main.js file. Like the Main component, we are collecting all of the components that we render on this page. You should stay with this layout, as you can directly see which main parts each page consists of. The User.js file should look as follows: import React, { Component } from 'react'; import UserProfile from './components/user'; import Chats from './Chats'; import Bar from './components/bar'; import CurrentUserQuery from './components/queries/currentUser'; export default class User extends Component { render() { return ( <CurrentUserQuery> <Bar changeLoginState={this.props.changeLoginState}/> <UserProfile username={this.props.match.params.username}/> <Chats /> </CurrentUserQuery> ); }} We use the CurrentUserQuery component as a wrapper for the Bar component and the Chats component. 
If a user visits the profile of a friend, they see the common application bar at the top. They can access their chats on the right-hand side, like in Facebook. We removed the Feed component and replaced it with a new UserProfile component. Importantly, the UserProfile receives the username property. Its value is taken from the properties of the User component. These properties were passed over by React Router. If you have a parameter, such as a username, in the routing path, the value is stored in the match.params.username property of the child component. The match object generally contains all matching information of React Router. From this point on, you can implement any custom logic that you want with this value. We will now continue with implementing the profile page. Follow these steps to build the user's profile page: Create a new folder, called user, inside the components folder. Create a new file, called index.js, inside the user folder. Import the dependencies at the top of the file, as follows: import React, { Component } from 'react'; import PostsQuery from '../queries/postsFeed'; import FeedList from '../post/feedlist'; import UserHeader from './header'; import UserQuery from '../queries/userQuery'; The first three lines should look familiar. The last two imported files, however, do not exist at the moment, but we are going to change that shortly. The first new file is UserHeader, which takes care of rendering the avatar image, the name, and information about the user. Logically, we request the data that we will display in this header through a new Apollo query, called UserQuery. 
Insert the code for the UserProfile component that we are building at the moment beneath the dependencies, as follows: export default class UserProfile extends Component { render() { const query_variables = { page: 0, limit: 10, username: this.props.username }; return ( <div className="user"> <div className="inner"> <UserQuery variables={{username: this.props.username}}> <UserHeader/> </UserQuery> </div> <div className="container"> <PostsQuery variables={query_variables}> <FeedList/> </PostsQuery> </div> </div> ) } } The UserProfile class is not complex. We are running two Apollo queries simultaneously. Both have the variables property set. The PostQuery receives the regular pagination fields, page and limit, but also the username, which initially came from React Router. This property is also handed over to the UserQuery, inside of a variables object. We should now edit and create the Apollo queries, before programming the profile header component. Open the postsFeed.js file from the queries folder. To use the username as input to the GraphQL query we first have to change the query string from the GET_POSTS variable. Change the first two lines to match the following code: query postsFeed($page: Int, $limit: Int, $username: String) { postsFeed(page: $page, limit: $limit, username: $username) { Add a new line to the getVariables method, above the return statement: if(typeof variables.username !== typeof undefined) { query_variables.username = variables.username; } If the custom query component receives a username property, it is included in the GraphQL request. It is used to filter posts by the specific user that we are viewing. Create a new userQuery.js file in the queries folder to create the missing query class. 
Import all of the dependencies and parse the new query schema with graphql-tag, as follows:

import React, { Component } from 'react';
import { Query } from 'react-apollo';
import Loading from '../loading';
import Error from '../error';
import gql from 'graphql-tag';

const GET_USER = gql`
  query user($username: String!) {
    user(username: $username) {
      id
      email
      username
      avatar
    }
  }`;

The preceding query is nearly the same as the currentUser query. We are going to implement the corresponding user query later, in our GraphQL API. The component itself is as simple as the ones that we created before. Insert the following code:

export default class UserQuery extends Component {
  getVariables() {
    const { variables } = this.props;
    var query_variables = {};
    if(typeof variables.username !== typeof undefined) {
      query_variables.username = variables.username;
    }
    return query_variables;
  }
  render() {
    const { children } = this.props;
    const variables = this.getVariables();
    return(
      <Query query={GET_USER} variables={variables}>
        {({ loading, error, data }) => {
          if (loading) return <Loading />;
          if (error) return <Error><p>{error.message}</p></Error>;
          const { user } = data;
          return React.Children.map(children, function(child){
            return React.cloneElement(child, { user });
          })
        }}
      </Query>
    )
  }
}

We set the query property and the parameters that are collected by the getVariables method on the GraphQL Query component. The rest is the same as any other query component that we have written. All child components receive a new property, called user, which holds all the information about the user, such as their name, their email, and their avatar image. The last step is to implement the UserProfileHeader component. This component renders the user property, with all its values. It is just simple HTML markup.
Copy the following code into the header.js file, in the user folder:

import React, { Component } from 'react';

export default class UserProfileHeader extends Component {
  render() {
    const { avatar, email, username } = this.props.user;
    return (
      <div className="profileHeader">
        <div className="avatar">
          <img src={avatar}/>
        </div>
        <div className="information">
          <p>{username}</p>
          <p>{email}</p>
          <p>You can provide further information here and build your really personal header component for your users.</p>
        </div>
      </div>
    )
  }
}

We have finished the new front end components, but the UserProfile component is still not working. The queries that we are using here either do not accept the username parameter or have not yet been implemented.

Querying the user profile

With the new profile page, we have to update our back end accordingly. Let's take a look at what needs to be done, as follows:

We have to add the username parameter to the schema of the postsFeed query and adjust the resolver function.
We have to create the schema and the resolver function for the new UserQuery component.

We will begin with the postsFeed query. Edit the postsFeed query in the RootQuery type of the schema.js file to match the following code:

postsFeed(page: Int, limit: Int, username: String): PostFeed @auth

Here, I have added the username as an optional parameter. Now, head over to the resolvers.js file, and take a look at the corresponding resolver function. Replace the signature of the function to extract the username from the variables, as follows:

postsFeed(root, { page, limit, username }, context) {

To make use of the new parameter, add the following lines of code above the return statement:

if(typeof username !== typeof undefined) {
  query.include = [{model: User}];
  query.where = { '$User.username$': username };
}

In the preceding code, we fill the include field of the query object with the Sequelize model that we want to join. This allows us to filter the posts by the associated User model in the next step.
Then, we create a normal where object, in which we write the filter condition. If you want to filter the posts by an associated table of users, you can wrap the model and field names that you want to filter by with dollar signs. In our case, we wrap User.username with dollar signs, which tells Sequelize to query the User model's table and filter by the value of the username column. No adjustments are required for the pagination part. The GraphQL query is now ready. The great thing about the small changes that we have made is that we have just one API function that accepts several parameters, either to display posts on a single user profile, or to display a list of posts like a news feed. Let's move on and implement the new user query. Add the following line to the RootQuery in your GraphQL schema: user(username: String!): User @auth This query only accepts a username, but this time it is a required parameter in the new query. Otherwise, the query would make no sense, since we only use it when visiting a user's profile through their username. In the resolvers.js file, we will now implement the resolver function using Sequelize: user(root, { username }, context) { return User.findOne({ where: { username: username } }); }, In the preceding code, we use the findOne method of the User model by Sequelize, and search for exactly one user with the username that we provided in the parameter. We also want to display the email of the user on the user's profile page. Add the email as a valid field on the User type in your GraphQL schema with the following line of code: email: String With this step, our back end code and the user page are ready. This article walked you through the installation process of React Router and how to implement a route in React. Then we moved on to more advanced stuff by implementing a user profile, similar to Facebook, with a GraphQL backend. 
If you found this post useful, do check out the book, Hands-on Full-Stack Web Development with GraphQL and React. This book teaches you how to build scalable full-stack applications while learning to solve complex problems with GraphQL. How to build a Relay React App [Tutorial] React vs. Vue: JavaScript framework wars Working with the Vue-router plugin for SPAs
Bhagyashree R
18 May 2019
14 min read
Applying styles to Material-UI components in React [Tutorial]

The majority of styles that are applied to Material-UI components are part of the theme styles. In some cases, you need the ability to style individual components without changing the theme. For example, a button in one feature might need a specific style applied to it that shouldn't change every other button in the app. Material-UI provides several ways to apply custom styles to components as a whole, or to specific parts of components. This article is taken from the book React Material-UI Cookbook by Adam Boduch. This book will serve as your ultimate guide to building compelling user interfaces with React and Material Design. Filled with practical and to-the-point recipes, you will learn how to implement sophisticated UI components. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository. In this article, we will look at the various styling solutions for designing appealing user interfaces, including basic component styles, scoped component styles, extending component styles, moving styles to themes, and others.

Basic component styles

Material-UI uses JavaScript Style Sheets (JSS) to style its components. You can apply your own JSS using the utilities provided by Material-UI.

How to do it...

The withStyles() function is a higher-order function that takes a style object as an argument. The function that it returns takes the component to style as an argument.
Here's an example:

import React, { useState } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Card from '@material-ui/core/Card';
import CardActions from '@material-ui/core/CardActions';
import CardContent from '@material-ui/core/CardContent';
import Button from '@material-ui/core/Button';
import Typography from '@material-ui/core/Typography';

const styles = theme => ({
  card: {
    width: 135,
    height: 135,
    textAlign: 'center'
  },
  cardActions: {
    justifyContent: 'center'
  }
});

const BasicComponentStyles = withStyles(styles)(({ classes }) => {
  const [count, setCount] = useState(0);
  const onIncrement = () => {
    setCount(count + 1);
  };
  return (
    <Card className={classes.card}>
      <CardContent>
        <Typography variant="h2">{count}</Typography>
      </CardContent>
      <CardActions className={classes.cardActions}>
        <Button size="small" onClick={onIncrement}>Increment</Button>
      </CardActions>
    </Card>
  );
});

export default BasicComponentStyles;

Here's what this component looks like:

How it works...

Let's take a closer look at the styles defined by this example:

const styles = theme => ({
  card: {
    width: 135,
    height: 135,
    textAlign: 'center'
  },
  cardActions: {
    justifyContent: 'center'
  }
});

The styles that you pass to withStyles() can be either a plain object or a function that returns a plain object, as is the case with this example. The benefit of using a function is that the theme values are passed to the function as an argument, in case your styles need access to the theme values. There are two styles defined in this example: card and cardActions. You can think of these as Cascading Style Sheets (CSS) classes. Here's what these two styles would look like as CSS:

.card {
  width: 135px;
  height: 135px;
  text-align: center;
}
.cardActions {
  justify-content: center;
}

By calling withStyles(styles)(MyComponent), you're returning a new component that has a classes property. This object has all of the classes that you can apply to components now.
You can't just do something such as this: <Card className="card" /> When you define your styles, they have their own build process and every class ends up getting its own generated name. This generated name is what you'll find in the classes object, so this is why you would want to use it. There's more... Instead of working with higher-order functions that return new components, you can leverage Material-UI style hooks. This example already relies on the useState() hook from React, so using another hook in the component feels like a natural extension of the same pattern that is already in place. Here's what the example looks like when refactored to take advantage of the makeStyles() function: import React, { useState } from 'react';import { makeStyles } from '@material-ui/styles';import Card from '@material-ui/core/Card';import CardActions from '@material-ui/core/CardActions';import CardContent from '@material-ui/core/CardContent';import Button from '@material-ui/core/Button';import Typography from '@material-ui/core/Typography';const useStyles = makeStyles(theme => ({card: {width: 135,height: 135,textAlign: 'center'},cardActions: {justifyContent: 'center'}}));export default function BasicComponentStyles() {const classes = useStyles();const [count, setCount] = useState(0);const onIncrement = () => {setCount(count + 1);};return (<Card className={classes.card}><CardContent><Typography variant="h2">{count}</Typography></CardContent><CardActions className={classes.cardActions}><Button size="small" onClick={onIncrement}>Increment</Button></CardActions></Card>);}The useStyles() hook is built using the makeStyles() function—which takes the exact same styles argument as withStyles(). By calling useStyles() within the component, you have your classes object. Another important thing to point out is that makeStyles is imported from @material-ui/styles, not @material-ui/core/styles. 
Scoped component styles Most Material-UI components have a CSS API that is specific to the component. This means that instead of having to assign a class name to the className property for every component that you need to customize, you can target specific aspects of the component that you want to change. Material-UI has laid the foundation for scoping component styles; you just need to leverage the APIs. How to do it... Let's say that you have the following style customizations that you want to apply to the Button components used throughout your application: Every button needs a margin by default. Every button that uses the contained variant should have additional top and bottom padding. Every button that uses the contained variant and the primary color should have additional top and bottom padding, as well as additional left and right padding. Here's an example that shows how to use the Button CSS API to target these three different Button types with styles: import React, { Fragment } from 'react';import { withStyles } from '@material-ui/core/styles';import Button from '@material-ui/core/Button';const styles = theme => ({root: {margin: theme.spacing(2)},contained: {paddingTop: theme.spacing(2),paddingBottom: theme.spacing(2)},containedPrimary: {paddingLeft: theme.spacing(4),paddingRight: theme.spacing(4)}});const ScopedComponentStyles = withStyles(styles)(({ classes: { root, contained, containedPrimary } }) => (<Fragment><Button classes={{ root }}>My Default Button</Button><Button classes={{ root, contained }} variant="contained">My Contained Button</Button><Buttonclasses={{ root, contained, containedPrimary }}variant="contained"color="primary">My Contained Primary Button</Button></Fragment>));export default ScopedComponentStyles; Here's what the three rendered buttons look like: How it works... The Button CSS API takes named styles and applies them to the component. These same names are used in the styles in this code. 
For example, root applies to every Button component, whereas contained only applies its styles to Button components that use the contained variant, and the containedPrimary style only applies to Button components that use the contained variant and the primary color.

There's more...

Each style is destructured from the classes property, then applied to the appropriate Button component. However, you don't actually need to do all of this work. Since the Material-UI CSS API takes care of applying styles to components in a way that matches what you're actually targeting, you can just pass the classes directly to the buttons and get the same result. Here's a simplified version of this example:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button classes={classes}>My Default Button</Button>
    <Button classes={classes} variant="contained">My Contained Button</Button>
    <Button classes={classes} variant="contained" color="primary">My Contained Primary Button</Button>
  </Fragment>
));

export default ScopedComponentStyles;

The output looks the same because only buttons that match the constraints of the CSS API get the styles applied to them. For example, the first Button has the root, contained, and containedPrimary styles passed to the classes property, but only root is applied, because it isn't using the contained variant or the primary color. The second Button also has all three styles passed to it, but only root and contained are applied. The third Button has all three styles applied to it because it meets the criteria of each style.
Extending component styles

You can extend styles that you apply to one component with styles that you apply to another component. Since your styles are JavaScript objects, one option is to extend one style object with another. The only problem with this approach is that you end up with a lot of duplicate style properties in the CSS output. A better alternative is to use the jss extend plugin.

How to do it...

Let's say that you want to render three buttons and share some of the styles among them. One approach is to extend generic styles with more specific styles using the jss extend plugin. Here's how to do it:

import React, { Fragment } from 'react';
import { JssProvider, jss } from 'react-jss';
import {
  withStyles,
  createGenerateClassName
} from '@material-ui/styles';
import {
  createMuiTheme,
  MuiThemeProvider
} from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    extend: 'contained',
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const App = ({ children }) => (
  <JssProvider
    jss={jss}
    generateClassName={createGenerateClassName()}
  >
    <MuiThemeProvider theme={createMuiTheme()}>
      {children}
    </MuiThemeProvider>
  </JssProvider>
);

const Buttons = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button className={classes.root}>My Default Button</Button>
    <Button className={classes.contained} variant="contained">
      My Contained Button
    </Button>
    <Button
      className={classes.containedPrimary}
      variant="contained"
      color="primary"
    >
      My Contained Primary Button
    </Button>
  </Fragment>
));

const ExtendingComponentStyles = () => (
  <App>
    <Buttons />
  </App>
);

export default ExtendingComponentStyles;

Here's what the rendered buttons look like:

How it works...

The easiest way to use the jss extend plugin in your Material-UI application is to use the default JSS plugin presets, which include jss extend.
Material-UI has several JSS plugins installed by default, but jss extend isn't one of them. Let's take a look at the App component in this example to see how this JSS plugin is made available:

const App = ({ children }) => (
  <JssProvider
    jss={jss}
    generateClassName={createGenerateClassName()}
  >
    <MuiThemeProvider theme={createMuiTheme()}>
      {children}
    </MuiThemeProvider>
  </JssProvider>
);

The JssProvider component is how JSS is enabled in Material-UI applications. Normally, you wouldn't have to interface with it directly, but this is necessary when adding a new JSS plugin. The jss property takes the JSS preset object that includes the jss extend plugin. The generateClassName property takes a function from Material-UI that helps generate class names that are specific to Material-UI.

Next, let's take a closer look at some styles:

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    extend: 'contained',
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

The extend property takes the name of a style that you want to extend. In this case, the contained style extends root, and containedPrimary extends contained and root. Now let's take a look at how this translates into CSS. Here's what the root style looks like:

.Component-root-1 {
  margin: 16px;
}

Next, here's the contained style:

.Component-contained-2 {
  margin: 16px;
  padding-top: 16px;
  padding-bottom: 16px;
}

Finally, here's the containedPrimary style:

.Component-containedPrimary-3 {
  margin: 16px;
  padding-top: 16px;
  padding-left: 32px;
  padding-right: 32px;
  padding-bottom: 16px;
}

Note that the properties from the more-generic styles are included in the more-specific styles. Some properties are duplicated, but the duplication lives in the generated CSS, instead of in your JavaScript object properties.
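Conceptually, the extend behavior amounts to object merging. Here's a rough plain-JavaScript model of the output above (not what JSS literally does at runtime); the pixel values assume the default 8px spacing unit, so theme.spacing(2) is 16 and theme.spacing(4) is 32:

```javascript
// Rough model of the jss extend plugin's effect: an extending rule
// inherits every property of the rule it extends.
const root = { margin: 16 };
const contained = { ...root, paddingTop: 16, paddingBottom: 16 };
const containedPrimary = { ...contained, paddingLeft: 32, paddingRight: 32 };

console.log(containedPrimary);
// { margin: 16, paddingTop: 16, paddingBottom: 16,
//   paddingLeft: 32, paddingRight: 32 }
```

The merged objects line up with the three generated CSS rules shown above.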
Furthermore, you could put these extended styles in a more central location in your code base, so that multiple components could use them.

Moving styles to themes

As you develop your Material-UI application, you'll start to notice style patterns that repeat themselves. In particular, styles that apply to one type of component, such as buttons, evolve into a theme.

How to do it...

Let's revisit the example from the Scoped component styles section:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button classes={classes}>My Default Button</Button>
    <Button classes={classes} variant="contained">
      My Contained Button
    </Button>
    <Button classes={classes} variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </Fragment>
));

export default ScopedComponentStyles;

Here's what these buttons look like after they have these styles applied to them:

Now, let's say you've implemented these same styles in several places throughout your app because this is how you want your buttons to look. At this point, you've evolved a simple component customization into a theme. When this happens, you shouldn't have to keep implementing the same styles over and over again. Instead, the styles should be applied automatically by using the correct component and the correct property values.
Let's move these styles into the theme:

import React from 'react';
import {
  createMuiTheme,
  MuiThemeProvider
} from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const defaultTheme = createMuiTheme();

const theme = createMuiTheme({
  overrides: {
    MuiButton: {
      root: {
        margin: 16
      },
      contained: {
        paddingTop: defaultTheme.spacing(2),
        paddingBottom: defaultTheme.spacing(2)
      },
      containedPrimary: {
        paddingLeft: defaultTheme.spacing(4),
        paddingRight: defaultTheme.spacing(4)
      }
    }
  }
});

const MovingStylesToThemes = () => (
  <MuiThemeProvider theme={theme}>
    <Button>My Default Button</Button>
    <Button variant="contained">My Contained Button</Button>
    <Button variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </MuiThemeProvider>
);

export default MovingStylesToThemes;

Now, you can use Button components without having to apply the same styles every time.

How it works...

Let's take a closer look at how your styles fit into a Material-UI theme:

overrides: {
  MuiButton: {
    root: {
      margin: 16
    },
    contained: {
      paddingTop: defaultTheme.spacing(2),
      paddingBottom: defaultTheme.spacing(2)
    },
    containedPrimary: {
      paddingLeft: defaultTheme.spacing(4),
      paddingRight: defaultTheme.spacing(4)
    }
  }
}

The overrides property is an object that allows you to override the component-specific properties of the theme. In this case, it's the MuiButton component styles that you want to override. Within MuiButton, you have the same CSS API that is used to target specific aspects of components. This makes moving your styles into the theme straightforward because there isn't much to change.

One thing that did have to change in this example is the way spacing works. In normal styles that are applied via withStyles(), you have access to the current theme because it's passed in as an argument. You still need access to the spacing data, but there's no theme argument because you're not in a function.
Since you're just extending the default theme, you can access it by calling createMuiTheme() without any arguments, as this example shows.

This article explored some of the ways you can apply styles to Material-UI components of your React applications. There are many other styling options available to your Material-UI app beyond withStyles(). There's the styled() higher-order component function that emulates styled components. You can also jump outside the Material-UI style system and use inline CSS styles, or import CSS modules and apply those styles.

If you found this post useful, do check out the book, React Material-UI Cookbook by Adam Boduch. This book will help you build modern-day applications by implementing Material Design principles in React applications using Material-UI.

Keeping animations running at 60 FPS in a React Native app [Tutorial]
React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]
Building a Progressive Web Application with Create React App 2 [Tutorial]
Bryan Cantrill on the changing ethical dilemmas in Software Engineering

Vincy Davis
17 May 2019
6 min read
Earlier this month at the Craft Conference in Budapest, Bryan Cantrill (Chief Technology Officer at Joyent) gave a talk on “Andreessen's Corollary: Ethical Dilemmas in Software Engineering”. In 2011, Marc Andreessen penned an essay, ‘Why Software Is Eating The World’, in The Wall Street Journal. In it, he argued that software is present in every field and is poised to take over large swathes of the economy. He believed, way back in 2011, that “many of the prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses.” Eight years later, Bryan Cantrill believes this prophecy is clearly coming to fulfillment. According to the article ‘Software engineering code of ethics’, published in 1997 by the ACM (Association for Computing Machinery), a code is not a simple ethical algorithm that generates ethical judgements; in some situations, its principles can conflict with one another, requiring a software engineer to exercise consistent ethical judgement. The article lays out principles for software engineers to follow, which, according to Bryan, are difficult to live up to. Among them: engineers should ensure that the product they work on is useful and of acceptable quality to the public, the employer, the client, and the user; is completed on time and at reasonable cost; and is free of errors. Specifications should be well documented, satisfy the user's requirements, and have the client's approval. Projects should use appropriate methodology and good management, with realistic estimates of cost, schedule, and outcome. The guiding context surrounding the code of ethics remains timeless, but as times have changed, these principles have become old-fashioned.
With the immense use of software across industries, it's difficult for software engineers to follow these old principles and remain ethically sound. Bryan calls this era an 'ethical grey area' for software engineers. The software's contact with our broader world has brought with it novel ethical dilemmas for those who endeavor to build it. More than ever, software engineers are likely to find themselves on new frontiers with respect to society, the law, or their own moral compass. Often without any formal training, or even acknowledgement of the ethical dimensions of their work, software engineers have to make ethical judgments.

Ethical dilemmas in software development since Andreessen's prophecy

2012: Facebook began performing emotional-manipulation experiments, in the name of research or to generate revenue, in which posts were classified as positive or negative.

2013: Zenefits, a Silicon Valley startup, needed its employees certified by the state of California to build its software, which meant sitting through 52 hours of training material in the web browser. A manager created a hack called 'Macro' that made it possible to complete the pre-licensing education requirement in less than 52 hours, and it was passed on to almost 100 Zenefits employees to automate the process for them too.

2014: Uber illegally entered the Portland market with a piece of software called 'Greyball', which Uber used to intentionally evade Portland Bureau of Transportation (PBOT) officers and deny their ride requests.

2015: Google Photos mislabeled photo captions; in one instance, Google mistakenly identified a dark-skinned individual as a 'Gorilla'. Google reacted promptly and removed the photo. This highlighted a real weakness of Artificial Intelligence (AI): AI relies on biased human classification, at times repeating its patterns.
Google faced the problem of defending this mistake, as it had not intentionally misled its network with such wrong data.

2016: The first Tesla 'Autopilot' car was launched. It had traffic-aware cruise control and steering-assist features, but was sold and marketed as an autopilot car. In an accident, the driver was killed, perhaps because he believed that the car would drive itself. This was a serious problem. Tesla was using two cameras to judge movements while driving; the system should be understood as an enhancement for the driver, not a replacement.

2017: Facebook faced the ire of the anti-Rohingya violence in Myanmar. Facebook messages were used to coordinate an effective genocide against the Rohingya, a mostly Muslim minority community, in which 75,000 people died. Facebook did not enable it or advocate it; it was merely a communication platform used for a wrong purpose. But Facebook could have helped reduce the gravity of the situation by acting promptly and not allowing such messages to circulate. This shows that not everything should be automated, and human judgement cannot be replaced anytime soon.

2018: In the wake of the Pittsburgh shooting, the alleged shooter had used the Gab platform to post against Jews. Gab, which bills itself as "the free speech social network," is small compared to mainstream social media platforms, but it has an avid user base. Joyent provided infrastructure to Gab, but quickly removed them from its platform after the horrific incident.

2019: After the Boeing 737 MAX crashes (flights JT610 and ET302), reports emerged that the aircraft's MCAS system played a role. The crashes happened because a faulty sensor erroneously reported that the airplane was stalling. The false report triggered an automated system known as the Maneuvering Characteristics Augmentation System (MCAS), which activates without the pilot's input. The crew confirmed that the manual trim operation was not working.
These are some examples of the ethical dilemmas that have emerged since Andreessen's prophecy. As seen, all the incidents were the result of ethical decisions gone wrong. It is clear that 'what is right for software is not necessarily right for society.'

How to deal with these ethical dilemmas?

In the summer of 2018, the ACM came up with a new code of ethics:

- Contribute to society and human well-being
- Avoid harm
- Be honest and trustworthy

It has also included an Integrity Project, which will have case studies and an "Ask an Ethicist" feature. These efforts by the ACM will help software engineers facing ethical dilemmas, and will pave the way for discussions resulting in behavior consistent with the code of ethics. Organisations should encourage such discussions; they help like-minded people perpetuate a culture of consideration of ethical consequences. As software's footprint continues to grow, the ethical dilemmas of software engineers will only expand. These ethical dilemmas are Andreessen's corollary, and software engineers must address them collectively and directly. Software engineers agree with this evolving nature of ethical dilemmas.

https://twitter.com/MA_Hanin/status/1129082836512911360

Watch the talk by Bryan Cantrill at Craft Conference.

All coding and no sleep makes Jack/Jill a dull developer, research confirms
Red Badger Tech Director Viktor Charypar talks monorepos, lifelong learning, and the challenges facing open source software
Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model
Stack Overflow confirms production systems hacked

Vincy Davis
17 May 2019
2 min read
Almost a week after the attack, Stack Overflow admitted in an official security update yesterday that its production systems had been hacked. “Over the weekend, there was an attack on Stack Overflow. We have confirmed that some level of production access was gained on May 11”, said Mary Ferguson, VP of Engineering at Stack Overflow. In this short update, the company mentioned that it is investigating the extent of the access and addressing all known vulnerabilities. Though not confirmed, the company has identified no breach of customer or user data.

https://twitter.com/gcluley/status/1129260135778607104

Some users are acknowledging the fact that the firm has at least come forward and accepted the security violation. A user on Reddit said, “Wow. I'm glad they're letting us know early, but this sucks”. There are other users who think that security breaches due to hacking are very common nowadays. A user on Hacker News commented, “I think we've reached a point where it's safe to say that if you're using a service -any service - assume your data is breached (or willingly given) and accessible to some unknown third party. That third party can be the government, it can be some random marketer or it can be a malicious hacker. Just hope that you have nothing anywhere that may be of interest or value to anyone, anywhere. Good luck.”

A few days ago, there were reports that Stack Overflow directly links to Facebook profile pictures. This means that the linking unintentionally allows user activity throughout Stack Exchange to be tracked by Facebook, which can also track the topics that users are interested in.

Read More: Facebook again, caught tracking Stack Overflow user activity and data

Stack Overflow has also assured users that more information will be provided once the company concludes its investigation.
Stack Overflow survey data further confirms Python’s popularity as it moves above Java in the most used programming language list 2019 Stack Overflow survey: A quick overview Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman
8 tech companies and 18 governments sign the Christchurch Call to curb online extremism; the US backs off

Sugandha Lahoti
16 May 2019
6 min read
Yesterday, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron brought together world governments and leaders from the tech sector to adopt the Christchurch Call. The call is a non-binding agreement to eliminate terrorist and violent extremist content online, in response to the Christchurch terror attack of March 15, which killed 51 Muslim worshipers in Christchurch mosques in New Zealand. Tech companies had scrambled to take action due to the speed and volume of content that was live streamed on Facebook and then subsequently uploaded, re-uploaded, and shared by users worldwide.

“The Christchurch Call is a commitment by Governments and tech companies to eliminate terrorist and violent extremist content online. It rests on the conviction that a free, open and secure internet offers extraordinary benefits to society. Respect for freedom of expression is fundamental. However, no one has the right to create and share terrorist and violent extremist content online.”, reads the call.

These tech companies and governments will also be developing tools to prevent the upload of terrorist and violent extremist content; countering the roots of violent extremism; increasing transparency around the removal and detection of content; and reviewing how companies' algorithms direct users to violent extremist content. Representatives from Facebook, Google, Microsoft, Twitter, and other tech giants have also agreed to identify "appropriate checks on live streaming, aimed at reducing the risk of disseminating terrorist and violent extremist content online". Two days ago, in a statement, Facebook declared that it will start restricting users from using Facebook Live if they break certain rules, including its Dangerous Organizations and Individuals policy.
In addition to signing the Christchurch Call, Amazon, Facebook, Google, Twitter, and Microsoft are publishing nine steps that they will take to address the abuse of technology to spread terrorist and violent extremist content. These comprise five individual actions that each company is committing to take: terms of use; user reporting of terrorist and violent extremist content; enhancing technology; live streaming; and transparency reports. They also include four collaborative actions the companies will take together: shared technology development, crisis protocols, educating the public about terrorist and violent extremist content online, and combating hate and bigotry.

Also to be noted, Twitter CEO Jack Dorsey was the only social media CEO to attend the Call. Facebook's Mark Zuckerberg was a no-show, with Facebook's head of global affairs Nick Clegg attending instead. Microsoft was represented by President Brad Smith, while Wikimedia was represented by Wikipedia founder Jimmy Wales. Senior Vice President for Global Affairs Kent Walker participated on behalf of Google.

https://twitter.com/Policy/status/1128693729181749248

The signing of the Christchurch Call was organized around a meeting of digital ministers from the Group of 7 nations this week in Paris. So far, 18 countries and several tech organizations have joined the call, pledging to counter the drivers of terrorism and violent extremism and to ensure effective enforcement of applicable laws. The Call was adopted at the meeting by France, New Zealand, Canada, Indonesia, Ireland, Jordan, Norway, Senegal, the UK, and the European Commission. Other countries that have adopted the Call but were not at the meeting are Australia, Germany, India, Italy, Japan, the Netherlands, Spain, and Sweden. Sri Lanka has not signed the agreement.
In April, Sri Lanka suffered massive bombings of churches and hotels, which killed 321 people and led the Sri Lankan government to issue a blanket social media ban in the weeks that followed, for fear of social media inciting further violence and spreading misinformation. Sri Lanka's junior minister for defense, Ruwan Wijewardene, had said an initial investigation revealed the bombings were related to the New Zealand mosque attack. However, Ardern denied those claims, stating that the New Zealand government was not aware of any such intelligence.

Interestingly, the U.S. also refused to join other nations in pledging the Christchurch Call. A statement from the White House reads, “While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call. We will continue to engage governments, industry, and civil society to counter terrorist content on the Internet.”

“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the statement reads. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”

Critics shot down the White House's refusal to pledge to the non-binding Christchurch Call.

https://twitter.com/cwarzel/status/1128701486144282624
https://twitter.com/MarkMellman/status/1128691586001326082
https://twitter.com/StollmeyerEU/status/1128782543585775616
https://twitter.com/JamilSmith/status/1128742138903056385

The Christchurch Call is the first time governments and tech companies have jointly agreed to a set of commitments and ongoing collaboration to make the internet safer. “We can be proud of what we have started with the adoption of the Christchurch Call.
We’ve taken practical steps to try and stop what we experienced in Christchurch from happening again,” Jacinda Ardern said. “From here, I will work alongside others signed up to the Christchurch Call to bring more partners on board, and develop a range of practical initiatives to ensure the pledge we have made today is delivered,” she added.

Digital ministers of the Group of 7 nations who signed the Christchurch Call are also meeting today to discuss an upcoming charter that would cover broader territory than the Christchurch Call on toxic content and tech regulation in general.

https://twitter.com/LeoVaradkar/status/1128771650391093250
https://twitter.com/adamihad/status/1128298612734201858

Although critics praised the Call for its use of existing laws against extremist and terrorist content and its insistence that legal and platform content-regulation measures comply with international human rights law, they also listed certain shortcomings.

https://twitter.com/mediamorphis/status/1128773160357302273

Read the full text of the Christchurch Call here.

Google and Facebook working hard to clean image after the media backlash from the Christchurch terrorist attack
Facebook tightens rules around live streaming in response to the Christchurch terror attack
How social media enabled and amplified the Christchurch terrorist attack
Implementing autocompletion in a React Material UI application [Tutorial]

Bhagyashree R
16 May 2019
14 min read
Web applications typically provide autocomplete input fields when there are too many choices to select from. Autocomplete fields are like text input fields—as users start typing, they are given a smaller list of choices based on what they've typed. Once the user is ready to make a selection, the actual input is filled with components called Chips—especially relevant when the user needs to be able to make multiple selections.

In this article, we will start by building an Autocomplete component. Then we will move on to implementing multi-value selection and see how to better serve the autocomplete data through an API. To help our users better understand the results, we will also implement a feature that highlights the matched portion of the string value.

This article is taken from the book React Material-UI Cookbook by Adam Boduch. This book will serve as your ultimate guide to building compelling user interfaces with React and Material Design. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

Building an Autocomplete component

Material-UI doesn't actually come with an Autocomplete component. The reason is that, since there are so many different implementations of autocomplete selection components in the React ecosystem already, it doesn't make sense to provide another one. Instead, you can pick an existing implementation and augment it with Material-UI components so that it can integrate nicely with your Material-UI application.

How to do it?

You can use the Select component from the react-select package to provide the autocomplete functionality that you need. You can use Select properties to replace key autocomplete components with Material-UI components so that the autocomplete matches the look and feel of the rest of your app. Let's make a reusable Autocomplete component. The Select component allows you to replace certain aspects of the autocomplete experience.
In particular, the following are the components that you'll be replacing:

- Control: The text input component to use
- Menu: A menu with suggestions, displayed when the user starts typing
- NoOptionsMessage: The message that's displayed when there aren't any suggestions to display
- Option: The component used for each suggestion in Menu
- Placeholder: The placeholder text component for the text input
- SingleValue: The component for showing a value once it's selected
- ValueContainer: The component that wraps SingleValue
- IndicatorSeparator: Separates buttons on the right side of the autocomplete
- ClearIndicator: The component used for the button that clears the current value
- DropdownIndicator: The component used for the button that shows Menu

Each of these components is replaced with Material-UI components that change the look and feel of the autocomplete. Moreover, you'll have all of this as a new Autocomplete component that you can reuse throughout your app. Let's look at the result before diving into the implementation of each replacement component.

Following is what you'll see when the screen first loads:

If you click on the down arrow, you'll see a menu with all the values, as follows:

Try typing tor into the autocomplete text field, as follows:

If you make a selection, the menu is closed and the text field is populated with the selected value, as follows:

You can change your selection by opening the menu and selecting another value, or you can clear the selection by clicking on the clear button to the right of the text.

How does it work?

Let's break down the source by looking at the individual components that make up the Autocomplete component and replacing pieces of the Select component. Then, we'll look at the final Autocomplete component.
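Before breaking down the replacement components, it may help to see the kind of matching an autocomplete performs when you type tor, as in the screenshots above. This is an illustrative sketch of case-insensitive substring filtering (react-select's actual filtering is more configurable):

```javascript
// A few of the options from this recipe's data set
const options = [
  { label: 'Toronto Maple Leafs', value: 'TOR' },
  { label: 'Tampa Bay Lightning', value: 'TBL' },
  { label: 'Boston Bruins', value: 'BOS' }
];

// Keep the options whose label contains the typed text, ignoring case
const filterOptions = (opts, input) =>
  opts.filter(o => o.label.toLowerCase().includes(input.toLowerCase()));

console.log(filterOptions(options, 'tor').map(o => o.value));
// ['TOR']
```

With an empty input, every option passes the filter, which is why clicking the down arrow shows the full menu.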
Text input control Here's the source for the Control component: const inputComponent = ({ inputRef, ...props }) => ( <div ref={inputRef} {...props} /> ); const Control = props => ( <TextField fullWidth InputProps={{ inputComponent, inputProps: { className: props.selectProps.classes.input, inputRef: props.innerRef, children: props.children, ...props.innerProps } }} {...props.selectProps.textFieldProps} /> ); The inputComponent() function is a component that passes the inputRef value—a reference to the underlying input element—to the ref prop. Then, inputComponent is passed to InputProps to set the input component used by TextField. This component is a little bit confusing because it's passing references around and it uses a helper component for this purpose. The important thing to remember is that the job of Control is to set up the Select component to use a Material-UITextField component. Options menu Here's the component that displays the autocomplete options when the user starts typing or clicks on the down arrow: const Menu = props => ( <Paper square className={props.selectProps.classes.paper} {...props.innerProps} > {props.children} </Paper> ); The Menu component renders a Material-UI Paper component so that the element surrounding the options is themed accordingly. No options available Here's the NoOptionsMessage component. It is rendered when there aren't any autocomplete options to display, as follows: const NoOptionsMessage = props => ( <Typography color="textSecondary" className={props.selectProps.classes.noOptionsMessage} {...props.innerProps} > {props.children} </Typography> ); This renders a Typography component with textSecondary as the color property value. Individual option Individual options that are displayed in the autocomplete menu are rendered using the MenuItem component, as follows: const Option = props => ( <MenuItem buttonRef={props.innerRef} selected={props.isFocused} component="div" style={{ fontWeight: props.isSelected ? 
500 : 400 }} {...props.innerProps} > {props.children} </MenuItem> );

The selected and style properties alter the way that the item is displayed, based on the isFocused and isSelected properties, respectively. The children property sets the value of the item.

Placeholder text

The Placeholder text of the Autocomplete component is shown before the user types anything or makes a selection, as follows: const Placeholder = props => ( <Typography color="textSecondary" className={props.selectProps.classes.placeholder} {...props.innerProps} > {props.children} </Typography> );

The Material-UI Typography component is used to theme the Placeholder text.

SingleValue

Once again, the Material-UI Typography component is used to render the selected value from the menu within the autocomplete input, as follows: const SingleValue = props => ( <Typography className={props.selectProps.classes.singleValue} {...props.innerProps} > {props.children} </Typography> );

ValueContainer

The ValueContainer component is used to wrap the SingleValue component with a div and the valueContainer CSS class, as follows: const ValueContainer = props => ( <div className={props.selectProps.classes.valueContainer}> {props.children} </div> );

IndicatorSeparator

By default, the Select component uses a pipe character as a separator between the buttons on the right side of the autocomplete menu. Since they're going to be replaced by Material-UI button components, this separator is no longer necessary, as follows: const IndicatorSeparator = () => null; By having the component return null, nothing is rendered.

Clear option indicator

This button is used to clear any selection made previously by the user, as follows: const ClearIndicator = props => ( <IconButton {...props.innerProps}> <CancelIcon /> </IconButton> );

The purpose of this component is to use the Material-UI IconButton component and to render a Material-UI icon. The click handler is passed in through innerProps.
Show menu indicator

Just like the ClearIndicator component, the DropdownIndicator component replaces the button used to show the autocomplete menu with an icon from Material-UI, as follows: const DropdownIndicator = props => ( <IconButton {...props.innerProps}> <ArrowDropDownIcon /> </IconButton> );

Styles

Here are the styles used by the various sub-components of the autocomplete: const useStyles = makeStyles(theme => ({ root: { flexGrow: 1, height: 250 }, input: { display: 'flex', padding: 0 }, valueContainer: { display: 'flex', flexWrap: 'wrap', flex: 1, alignItems: 'center', overflow: 'hidden' }, noOptionsMessage: { padding: `${theme.spacing(1)}px ${theme.spacing(2)}px` }, singleValue: { fontSize: 16 }, placeholder: { position: 'absolute', left: 2, fontSize: 16 }, paper: { position: 'absolute', zIndex: 1, marginTop: theme.spacing(1), left: 0, right: 0 } }));

The Autocomplete

Finally, following is the Autocomplete component that you can reuse throughout your application: Autocomplete.defaultProps = { isClearable: true, components: { Control, Menu, NoOptionsMessage, Option, Placeholder, SingleValue, ValueContainer, IndicatorSeparator, ClearIndicator, DropdownIndicator }, options: [ { label: 'Boston Bruins', value: 'BOS' }, { label: 'Buffalo Sabres', value: 'BUF' }, { label: 'Detroit Red Wings', value: 'DET' }, { label: 'Florida Panthers', value: 'FLA' }, { label: 'Montreal Canadiens', value: 'MTL' }, { label: 'Ottawa Senators', value: 'OTT' }, { label: 'Tampa Bay Lightning', value: 'TBL' }, { label: 'Toronto Maple Leafs', value: 'TOR' }, { label: 'Carolina Hurricanes', value: 'CAR' }, { label: 'Columbus Blue Jackets', value: 'CBJ' }, { label: 'New Jersey Devils', value: 'NJD' }, { label: 'New York Islanders', value: 'NYI' }, { label: 'New York Rangers', value: 'NYR' }, { label: 'Philadelphia Flyers', value: 'PHI' }, { label: 'Pittsburgh Penguins', value: 'PIT' }, { label: 'Washington Capitals', value: 'WSH' }, { label: 'Chicago Blackhawks', value: 'CHI' }, {
label: 'Colorado Avalanche', value: 'COL' }, { label: 'Dallas Stars', value: 'DAL' }, { label: 'Minnesota Wild', value: 'MIN' }, { label: 'Nashville Predators', value: 'NSH' }, { label: 'St. Louis Blues', value: 'STL' }, { label: 'Winnipeg Jets', value: 'WPG' }, { label: 'Anaheim Ducks', value: 'ANA' }, { label: 'Arizona Coyotes', value: 'ARI' }, { label: 'Calgary Flames', value: 'CGY' }, { label: 'Edmonton Oilers', value: 'EDM' }, { label: 'Los Angeles Kings', value: 'LAK' }, { label: 'San Jose Sharks', value: 'SJS' }, { label: 'Vancouver Canucks', value: 'VAN' }, { label: 'Vegas Golden Knights', value: 'VGK' } ] };

The piece that ties all of the previous components together is the components property that's passed to Select. This is actually set as a default property in Autocomplete, so it can be further overridden. The value passed to components is a simple object that maps the component name to its implementation.

Selecting autocomplete suggestions

In the previous section, you built an Autocomplete component capable of selecting a single value. Sometimes, you need the ability to select multiple values from an Autocomplete component. The good news is that, with a few small additions, the component that you created in the previous section already does most of the work.

How to do it?

Let's walk through the additions that need to be made in order to support multi-value selection in the Autocomplete component, starting with the new MultiValue component, as follows: const MultiValue = props => ( <Chip tabIndex={-1} label={props.children} className={clsx(props.selectProps.classes.chip, { [props.selectProps.classes.chipFocused]: props.isFocused })} onDelete={props.removeProps.onClick} deleteIcon={<CancelIcon {...props.removeProps} />} /> );

The MultiValue component uses the Material-UI Chip component to render a selected value.
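The className expression in MultiValue relies on clsx, which joins its arguments into a single class string and, when given an object, includes only the keys whose values are truthy. Here's a tiny stand-in (not the real library — the actual clsx also handles arrays and nesting) that makes the behavior concrete:

```javascript
// Stand-in for clsx's core behavior, covering only the string and
// object argument forms used in the MultiValue component above.
function miniClsx(...args) {
  const classes = [];
  for (const arg of args) {
    if (typeof arg === 'string') {
      classes.push(arg);
    } else if (arg && typeof arg === 'object') {
      for (const [name, enabled] of Object.entries(arg)) {
        if (enabled) classes.push(name); // include key only when value is truthy
      }
    }
  }
  return classes.join(' ');
}

// Mirrors the MultiValue usage: the chip class is always applied,
// chipFocused only while the chip has focus.
console.log(miniClsx('chip', { chipFocused: true }));  // 'chip chipFocused'
console.log(miniClsx('chip', { chipFocused: false })); // 'chip'
```

This is why the focused chip gets its highlight style only while props.isFocused is true.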
In order to pass MultiValue to Select, add it to the components object that's passed to Select: components: { Control, Menu, NoOptionsMessage, Option, Placeholder, SingleValue, MultiValue, ValueContainer, IndicatorSeparator, ClearIndicator, DropdownIndicator },

Now you can use your Autocomplete component for single value selection, or for multi-value selection. You can add the isMulti property with a default value of true to defaultProps, as follows: isMulti: true, Now, you should be able to select multiple values from the autocomplete.

How does it work?

Nothing looks different about the autocomplete when it's first rendered, or when you show the menu. When you make a selection, the Chip component is used to display the value. Chips are ideal for displaying small pieces of information like this. Furthermore, the close button integrates nicely with it, making it easy for the user to remove individual selections after they've been made.

API-driven Autocomplete

You can't always have your autocomplete data ready to render on the initial page load. Imagine trying to load hundreds or thousands of items before the user can interact with anything. The better approach is to keep the data on the server and supply an API endpoint with the autocomplete text as the user types. Then you only need to load a smaller set of data returned by the API.

How to do it?

Let's rework the example from the previous section. We'll keep all of the same autocomplete functionality, except that, instead of passing an array to the options property, we'll pass in an API function that returns a Promise. Here's the API function that mocks an API call that resolves a Promise: const someAPI = searchText => new Promise(resolve => { setTimeout(() => { const teams = [ { label: 'Boston Bruins', value: 'BOS' }, { label: 'Buffalo Sabres', value: 'BUF' }, { label: 'Detroit Red Wings', value: 'DET' }, ...
]; resolve( teams.filter( team => searchText && team.label .toLowerCase() .includes(searchText.toLowerCase()) ) ); }, 1000); });

This function takes a search string argument and returns a Promise. The same data that would otherwise be passed to the Select component in the options property is filtered here instead. Think of anything that happens in this function as happening behind an API in a real app. The returned Promise is then resolved with an array of matching items following a simulated latency of one second. You also need to add a couple of components to the composition of the Select component (we're up to 13 now!), as follows: const LoadingIndicator = () => <CircularProgress size={20} />; const LoadingMessage = props => ( <Typography color="textSecondary" className={props.selectProps.classes.noOptionsMessage} {...props.innerProps} > {props.children} </Typography> );

The LoadingIndicator component is shown to the right of the autocomplete text input. It uses the CircularProgress component from Material-UI to indicate that the autocomplete is doing something. The LoadingMessage component follows the same pattern as the other text replacement components used with Select in this example. The loading text is displayed in the menu while the Promise that resolves the options is still pending. Lastly, there's the Select component. Instead of using Select, you need to use the AsyncSelect version, as follows: import AsyncSelect from 'react-select/lib/Async'; Otherwise, AsyncSelect works the same as Select, as follows: <AsyncSelect value={value} onChange={value => setValue(value)} textFieldProps={{ label: 'Team', InputLabelProps: { shrink: true } }} {...{ ...props, classes }} />

How does it work?

The only difference between a Select autocomplete and an AsyncSelect autocomplete is what happens while the request to the API is pending.
While the request is pending, the CircularProgress component is rendered to the right of the input as the user types, and the loading message is rendered in the menu using a Typography component.

Highlighting search results

When the user starts typing in an autocomplete and the results are displayed in the dropdown, it isn't always obvious how a given item matches the search criteria. You can help your users better understand the results by highlighting the matched portion of the string value.

How to do it?

You'll want to use two functions from the autosuggest-highlight package to help highlight the text presented in the autocomplete dropdown, as follows: import match from 'autosuggest-highlight/match'; import parse from 'autosuggest-highlight/parse'; Now, you can build a new component that will render the item text, highlighting as and when necessary, as follows: const ValueLabel = ({ label, search }) => { const matches = match(label, search); const parts = parse(label, matches); return parts.map((part, index) => part.highlight ? ( <span key={index} style={{ fontWeight: 500 }}> {part.text} </span> ) : ( <span key={index}>{part.text}</span> ) ); };

The end result is that ValueLabel renders an array of span elements, determined by the parse() and match() functions. One of the spans will be bolded if part.highlight is true. Now, you can use ValueLabel in the Option component, as follows: const Option = props => ( <MenuItem buttonRef={props.innerRef} selected={props.isFocused} component="div" style={{ fontWeight: props.isSelected ? 500 : 400 }} {...props.innerProps} > <ValueLabel label={props.children} search={props.selectProps.inputValue} /> </MenuItem> );

How does it work?

Now, when you search for values in the autocomplete text input, the results will highlight the search criteria in each item. This article helped you implement autocompletion in your Material UI React application.
Then we implemented multi-value selection and saw how to better serve the autocomplete data through an API endpoint. If you found this post useful, do check out the book, React Material-UI Cookbook by Adam Boduch. This book will help you build modern-day applications by implementing Material Design principles in React applications using Material-UI.

How to create a native mobile app with React Native [Tutorial]
Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]
How to build a Relay React App [Tutorial]
Richard Gall
13 May 2019
5 min read

Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a 'brain drain' in the U.K. tech industry

Ethics in the tech industry is an issue that's drawing a lot of attention from policy makers and business leaders alike. But the opinions of the people who actually work in tech have, for a long time, been absent from the conversation. However, UK-based think tank Doteveryone are working hard to change that. In a report published today - People Power and Technology: The Tech Workers' View - the think tank sheds light on tech workers' perspectives on the impact of their work on society. It suggests not only that there are significant tensions between the people who do the work and the people they're working for, but also that much of the tech workforce would welcome more regulation and support within the tech industry. Backed by responses from more than 1,000 tech workers, the Doteveryone report unearthed some interesting trends:

18% of respondents had left their job because they felt uncomfortable about certain ethical decisions.
45% of respondents said they thought there was too little regulation of tech (14% said there was too much).
Those working in artificial intelligence had more experience or a more acute awareness of ethical issues - 27% of respondents in that field said they had left their job as a consequence of such a decision (compared with 18% overall).

What are the biggest ethical challenges for tech workers, according to Doteveryone?

Respondents to the survey were concerned about a wide range of issues. Many of these are ones that have been well-documented over the last year or so, but the report nevertheless demonstrates how those closest to the technology that's driving significant - and potentially negative - change see those issues. One respondent quoted in the survey mentioned the issue of testing - something that often gets missed in broad and abstract conversations about privacy and automation. They said that their product "was not properly tested, and the company was too quick" to put it on the market.
Elsewhere, another respondent discussed a problem with the metrics being chased by software companies, where clicks and usage are paramount. "Some features tend to be addictive," they said. Other respondents mentioned "the exploitation of personal data" and "mass unemployment," indicating that even those at the center of the tech industry harbour serious concerns about the impact of technology on society. However, one of the most interesting concerns was one that we have discussed a lot - the importance of communication. "Too much technology less communication [sic] between people," said one respondent. In a nutshell, that seems to encapsulate Doteveryone's overall mission - a greater focus on the human element of technology, and resisting the seductions of technological solutionism.

Read next: Mozilla and Google Chrome refuse to support Gab’s Dissenter extension for violating acceptable use policy

How the absence of an ethical framework could cause brain drain in the U.K. tech sector

The Doteveryone report points out that the challenges tech workers face, and the level of discomfort they feel in their respective jobs, could lead to retention problems across the industry. The report states:

The UK tech industry has major concerns about the availability of staff. 93% of employers have struggled to recruit to tech roles in the past year, with shortages most acute for management and experienced professionals. Brexit is expected to exacerbate these issues. Each lost tech worker is estimated to cost a company over £30,000. Our findings show that potentially irresponsible technology practices are a significant factor for retention and it’s vital that these are addressed for the industry to thrive.

The key takeaway, then, is that the instability and pressure that come with working in a job or industry where innovation is everything require frameworks and support mechanisms.
Without them, the people actually doing the work in tech will not only find it harder to work responsibly; the pressure will also take its toll on them personally. With issues around burnout in the tech industry stealing the headlines in recent months (think about the protests around China's 996 culture), this confirms the importance of working together to develop new ways of supporting and protecting each other.

Read next: Artist Holly Herndon releases an album featuring an artificial intelligence ‘musician’

How tech workers are currently negotiating ethics

The report finds that, when negotiating ethical issues, tech workers primarily turn to their "own moral compass." Formal support structures feature very low on the list, with only 29% turning to company policy, and 27% to industry standards. But this doesn't mean tech workers are only guided by their conscience - many use the internet (38%) or discuss with colleagues (35%).

(Image: via Doteveryone.org)

What does Doteveryone argue needs to be done?

To combat these issues, Doteveryone is very clear in its recommendations. It argues that businesses need much more transparency in their processes for raising ethical issues, increased investment in training and resources so tech workers are better equipped to engage with ethical decision making in their work, and to work collaboratively to build industry-wide standards. But the report doesn't only call on businesses to act - it also calls for government to "provide incentives for responsible innovation." This is particularly interesting, as it is a little different to the continuing mantra of regulation. It arguably changes the discussion, focusing less on rules and restrictions, and much more on actively encouraging a human-centered approach to technology and business. How the conversation around ethics and technology evolves in the years to come remains to be seen.
But whatever happens, it's certainly essential that the views of those at the center of the industry are heard, and that they are empowered, as the report says, "to do their best work."

Richard Gall
10 May 2019
6 min read

Does it make sense to talk about DevOps engineers or DevOps tools?

DevOps engineers are in high demand - the job represents an engineering unicorn, someone that understands both development and operations and can help to foster a culture where the relationship between the two is almost frictionless. But there's some debate as to whether it makes sense to talk about a DevOps engineer at all. If DevOps is a culture or a set of practices that improves agility and empowers engineers to take more ownership over their work, should we really be thinking about DevOps as a single job that someone can simply train for? The quotes in this piece are taken from DevOps Paradox by Viktor Farcic, which will be published in June 2019. The book features interviews with a diverse range of figures drawn from across the DevOps world.

Is DevOps engineer a 'real' job or just recruitment spin?

Nirmal Mehta (@normalfaults), Technology Consultant at Booz Allen Hamilton, says "There's no such thing as a DevOps engineer. There shouldn't even be a DevOps team, because to me DevOps is more of a cultural and philosophical methodology, a process, and a way of thinking about things and communicating within an IT organization..." Mehta is cynical about organizations that put out job descriptions asking for DevOps engineers. It is, he argues, a way of cutting costs - a way of simply doing more with less. "A DevOps engineer is just a job posting that signals an organization wants to hire one less person to do twice as much work rather than hire both a developer and an operator." This view is echoed by other figures associated with the DevOps world. Mike Kail (@mdkail), CTO at Everest, says "I certainly don't view DevOps as a tool or a job title. In my view, at the core, it's a cultural approach to leveraging automation and orchestration to streamline both code development, infrastructure, application deployments and subsequently, the managing of those resources."
Similarly, Damian Duportal (@DamienDuportal), Træfik's Developer Advocate, says "there is no such thing as a DevOps engineer or even a DevOps team. The main purpose of DevOps is to focus on value, finding the optimal for the organization, and the value it will bring." For both Duportal and Kail, then, DevOps is primarily a cultural thing, something which needs to be embedded inside the practices of an organization.

Is it useful to talk about a DevOps team?

There are big question marks over the concept of a DevOps engineer. But what about a specific team? It's all well and good talking about organizational philosophy, but how do you actually affect change in a practical manner? Julian Simpson (@builddoctor), Neo4J's Global IT Manager, is sceptical about the concept of a DevOps team: “Can we have something called a DevOps team? I don't believe so. You might spin up a team to solve a DevOps problem, but then I wouldn't even say we specifically have a DevOps problem. I'd say you just have a problem." DevOps consultant Chris Riley (@HoardingInfo) has a similar take, saying: “DevOps Engineer as a title makes sense to me, but I don't think you necessarily have DevOps departments, nor do you seek that out. Instead, I think DevOps is a principle that you spread throughout your entire development organization. Rather, you look to reform your organization in a way that supports those initiatives versus just saying that we need to build this DevOps unit, and there we go, we're done, we're DevOps. Because by doing that you really have to empower that unit and most organizations aren't willing to do that." However, Red Hat Solutions Architect Wian Vos (@wianvos) has a different take. For Vos the idea of a DevOps team is actually crucial if you are to cultivate a DevOps mindset inside your organization: "Imagine... you and I were going to start a company. We're going to need a DevOps team because we have a burning desire to put out this awesome application.
The questions we have when we're putting together a DevOps team is both ‘Who are we hiring?’ and ‘What are we hiring for? Are we going to hire DevOps engineers? No. In that team, we want the best application developers, the best tester, and maybe we want a great infrastructure guy and a frontend/backend developer. I want people with specific roles who fit together as a team to be that DevOps team." For Vos, it's not so much about finding and hiring DevOps engineers - people with a specific set of skills and experience - but rather building a team that's constructed in such a way that it can put DevOps principles into practice.

Is there such a thing as a DevOps tool?

One of the interesting things about DevOps is that the debate seems to lead you into a bit of a bind. It's almost as if the more concrete we try and make it - turning it into a job, or a team - the less useful it becomes. This is particularly true when we consider tooling. Surely thinking about DevOps technologically, rather than speculatively, makes it more real? In general, it appears there is a consensus against the idea of DevOps tools. On this point Julian Simpson said "my original thinking about the movement from 2009 onwards, when the name was coined, was that it would be about collaboration and perhaps the tools would sort of come out of that collaboration." James Turnbull (@kartar), CEO of Rethink Robotics, is critical of the notion of DevOps tools. He says "I don't think there are such things as DevOps tools. I believe there are tools that make the process of being a cross-functional team better... Any tool that facilitates building that cross-functionality is probably a DevOps tool to the point where the term is likely meaningless."

When it comes to DevOps, everyone's still learning

With even industry figures disagreeing on what terms mean, or which ones are relevant, it's pretty clear that DevOps will remain a field that's contested and debated.
But perhaps this is important - if we expect it to simply be a solution to the engineering challenges we face, it's already failed as a concept. However, if we understand it as a framework or mindset for solving problems, then that is when it acquires greater potency. Viktor Farcic is a Developer Advocate at CloudBees, a member of the Google Developer Experts and Docker Captains groups, and a published author. His big passions are DevOps, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).

Amrata Joshi
09 May 2019
7 min read

ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more

The ongoing ICLR 2019 (International Conference on Learning Representations) has brought a pack full of surprises and key specimens of innovation. The conference started on Monday this week, and it’s already the last day today! This article covers the highlights of ICLR 2019 and introduces you to the ongoing research carried out by experts in the field of deep learning, data science, computational biology, machine vision, speech recognition, text understanding, robotics and much more. The team behind ICLR 2019 invited papers on Unsupervised objectives for agents, Curiosity and intrinsic motivation, Few shot reinforcement learning, Model-based planning and exploration, Representation learning for planning, Learning unsupervised goal spaces, Unsupervised skill discovery and Evaluation of unsupervised agents.

https://twitter.com/alfcnz/status/1125399067490684928

ICLR 2019, sponsored by Google, marks the presence of 200 researchers contributing to and learning from the academic research community by presenting papers and posters.

ICLR 2019 Day 1 highlights: Neural network, Algorithmic fairness, AI for social good and much more

Algorithmic fairness

https://twitter.com/HanieSedghi/status/1125401294880083968

The first day of the conference started with a talk on Highlights of Recent Developments in Algorithmic Fairness by Cynthia Dwork, an American computer scientist at Harvard University. She focused on "group fairness" notions that address the relative treatment of different demographic groups, and talked about research in the ML community that explores fairness via representations. The investigation of scoring, classifying, ranking, and auditing fairness was also discussed in this talk by Dwork.
Generating high fidelity images with Subscale Pixel Networks and Multidimensional Upscaling

https://twitter.com/NalKalchbrenner/status/1125455415553208321

Jacob Menick, a senior research engineer at Google DeepMind, and Nal Kalchbrenner, staff research scientist and co-creator of the Google Brain Amsterdam research lab, talked on Generating high fidelity images with Subscale Pixel Networks and Multidimensional Upscaling. They talked about the challenges involved in generating the images and how they address this issue with the help of the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size. They also explained how Multidimensional Upscaling is used to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs. There were, in all, 10 workshops conducted on the same day on AI and deep learning, covering topics such as:

The 2nd Learning from Limited Labeled Data (LLD) Workshop: Representation Learning for Weak Supervision and Beyond
Deep Reinforcement Learning Meets Structured Prediction
AI for Social Good
Debugging Machine Learning Models

The first day also witnessed a few interesting talks on neural networks covering topics such as The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, How Powerful are Graph Neural Networks?, etc. Overall, the first day was quite enriching and informative.

ICLR 2019 Day 2 highlights: AI in climate change, Protein structure, adversarial machine learning, CNN models and much more

AI’s role in climate change

https://twitter.com/natanielruizg/status/1125763990158807040

Tuesday, the second day of the conference, started with an interesting talk on Can Machine Learning Help to Conduct a Planetary Healthcheck? by Emily Shuckburgh, a Climate scientist and deputy head of the Polar Oceans team at the British Antarctic Survey.
She talked about the sophisticated numerical models of the Earth’s systems which have been developed so far, based on physics, chemistry and biology. She then highlighted a set of "grand challenge" problems and discussed various ways in which Machine Learning is helping to advance our capacity to address these.

Protein structure with a differentiable simulator

On the second day of ICLR 2019, Chris Sander, computational biologist, John Ingraham, Adam J Riesselman, and Debora Marks from Harvard University talked on Learning protein structure with a differentiable simulator. They talked about the protein folding problem and their aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. They also composed a neural energy function with a novel and efficient simulator, based on Langevin dynamics, to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information. They also discussed certain techniques for stabilizing backpropagation and demonstrated the model's capacity to make multimodal predictions.

Adversarial Machine Learning

https://twitter.com/natanielruizg/status/1125859734744117249

Day 2 was long and had Ian Goodfellow, a machine learning researcher and inventor of GANs, talk on Adversarial Machine Learning. He talked about how supervised learning works, making machine learning private, getting machine learning to work for new tasks, and reducing the dependency on large amounts of labeled data. He then discussed how adversarial techniques in machine learning are involved in the latest research frontiers. Day 2 also covered poster presentations and a few talks on Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset, Learning to Remember More with Less Memorization, etc.
ICLR 2019 Day 3 highlights: GAN, Autonomous learning and much more

Developmental autonomous learning: AI, Cognitive Sciences and Educational Technology

https://twitter.com/drew_jaegle/status/1125522499150721025

Day 3 of ICLR 2019 started with a talk by Pierre-Yves Oudeyer, research director at Inria, on Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology. He presented a research program that focuses on computational modeling of child development and learning mechanisms. He then discussed several developmental forces that guide exploration in large real-world spaces. He also talked about the models of curiosity-driven autonomous learning that enable machines to sample and explore their own goals and learning strategies. He then explained how these models and techniques can be successfully applied in the domain of educational technologies.

Generating knockoffs for feature selection using Generative Adversarial Networks (GAN)

Another interesting topic on the third day of ICLR 2019 was Generating knockoffs for feature selection using Generative Adversarial Networks (GAN) by James Jordon from Oxford University, Jinsung Yoon from California University, and Mihaela Schaar, Professor at UCLA. The experts talked about the Generative Adversarial Networks framework that helps in generating knockoffs with no assumptions on the feature distribution. They also talked about the model they created, which consists of four networks: a generator, a discriminator, a stability network and a power network. They further demonstrated the capability of their model to perform feature selection. This was followed by a few more interesting talks, like Deterministic Variational Inference for Robust Bayesian Neural Networks, and a series of poster presentations.

ICLR 2019 Day 4 highlights: Neural networks, RNN, neuro-symbolic concepts and much more

Learning natural language interfaces with neural models

Today’s focus was more on neural models and neuro-symbolic concepts.
The day started with a talk on Learning natural language interfaces with neural models by Mirella Lapata, a computer scientist. She gave an overview of recent progress on learning natural language interfaces which allow users to interact with various devices and services using everyday language. She also addressed the structured prediction problem of mapping natural language utterances onto machine-interpretable representations. She further outlined the various challenges it poses and described a general modeling framework based on neural networks which tackle these challenges. Ordered neurons: Integrating tree structures into Recurrent Neural Networks https://twitter.com/mmjb86/status/1126272417444311041 The next interesting talk was on Ordered neurons: Integrating tree structures into Recurrent Neural Networks by Professors Yikang Shen, Aaron Courville and Shawn Tan from Montreal University, and, Alessandro Sordoni, a researcher at Microsoft. In this talk, the experts focused on how they proposed a new RNN unit: ON-LSTM, which achieves good performance on four different tasks including language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference. The last day of ICLR 2019 was exciting and helped the researchers present their innovations and attendees got a chance to interact with the experts. To have a complete overview of each of these sessions, you can head over to ICLR’s Facebook page. Paper in Two minutes: A novel method for resource efficient image classification Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
Why DeepMind AlphaGo Zero is a game changer for AI research

Guest Contributor
09 May 2019
10 min read
DeepMind, a London-based artificial intelligence (AI) company currently owned by Alphabet, recently made great strides in AI with its AlphaGo program. It all began in October 2015, when the program beat the European Go champion Fan Hui 5-0 in a game of Go. This was the very first time an AI defeated a professional Go player; before that, computers were only known to have played Go at an "amateur" level. The company made headlines again in 2016 after its AlphaGo program beat Lee Sedol, a professional Go player and world champion, with a score of 4-1 in a five-game match. Furthermore, in late 2017, an improved version of the program called AlphaGo Zero defeated AlphaGo 100 games to 0. The best part? AlphaGo Zero's strategies were self-taught, i.e., it was trained without any data from human games. AlphaGo Zero was able to defeat its predecessor after only three days of training, using less processing power than AlphaGo; the original AlphaGo, by contrast, required months to learn how to play. All these facts raise the questions: what makes AlphaGo Zero so exceptional? Why is it such a big deal? How does it even work? So, without further ado, let's dive into the what, why, and how of DeepMind's AlphaGo Zero.

What is DeepMind AlphaGo Zero?

Simply put, AlphaGo Zero is the strongest Go program in the world (with the exception of AlphaZero). As mentioned before, it monumentally outperforms all previous versions of AlphaGo. Just check out the graph below, which compares the Elo ratings of the different versions of AlphaGo.

Source: DeepMind

The Elo rating system is a method for calculating the relative skill levels of players in zero-sum games such as chess and Go. It is named after its creator, Arpad Elo, a Hungarian-American physics professor. All previous versions of AlphaGo were trained using human data: they learned from and improved upon the moves played by human experts and professional Go players. But AlphaGo Zero didn't use any human data whatsoever.
Instead, it had to learn completely from playing against itself. According to DeepMind's Professor David Silver, the reason that playing against itself works so much better than using strong human data is that AlphaGo always has an opponent of just the right level. It starts off extremely naive, with perfectly random play, and yet at every step of the learning process it has an opponent (a "sparring partner") that is exactly calibrated to its current level of performance. To begin with, these players are terribly weak, but over time they become progressively stronger.

Why is reinforcement learning such a big deal?

People tend to assume that machine learning is all about big data and massive amounts of computation. But with AlphaGo Zero, AI scientists at DeepMind realized that algorithms matter much more than computing power or data availability. AlphaGo Zero required less computation than previous versions and yet performed at a much higher level, thanks to much more principled algorithms. It is a system trained completely from scratch, starting from random behavior and progressing from first principles to discover the game of Go tabula rasa. It is, therefore, no longer constrained by the limits of human knowledge. Note that AlphaGo Zero did not use zero-shot learning, which is essentially the ability of a machine to solve a task despite not having received any training for that task.

How does it work?

AlphaGo Zero achieves all this by employing a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. As explained previously, the system starts off with a single neural network that knows absolutely nothing about the game of Go. By combining this neural network with a powerful search algorithm, it then plays games against itself.
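The training cycle just described (self-play, then network improvement, repeated) can be sketched as a simple loop. This is a structural sketch only: `play_game` and `update_network` are hypothetical stand-ins for MCTS-guided self-play and neural network training, not DeepMind's implementation.

```python
def self_play_training(network, play_game, update_network, iterations, games_per_iter=10):
    """Structural sketch of AlphaGo Zero's training cycle: the current
    network plays itself, and the resulting game records are used to
    produce a stronger network, which then repeats the process."""
    for _ in range(iterations):
        # self-play: the same network plays both sides of every game
        games = [play_game(network) for _ in range(games_per_iter)]
        # learning: tune the network on the self-play results
        network = update_network(network, games)
    return network
```

Each pass through the loop replaces the player with a slightly stronger version of itself, which is exactly the "perfectly calibrated sparring partner" property described above.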
As it plays more and more games, the neural network is updated and tuned to predict moves, and even the eventual winner of the games. This revised neural network is then recombined with the search algorithm to generate a new, stronger version of AlphaGo Zero, and the process repeats. With each iteration, the performance of the system improves and the quality of the self-play games advances, leading to increasingly accurate neural networks and ever more powerful versions of AlphaGo Zero.

Now, let's dive into some of the technical details that make this version of AlphaGo so much better than all its forerunners. AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers. Only four Tensor Processing Units (TPUs) were used for inference. And of course, the neural network initially knew nothing about Go beyond the rules.

Both AlphaGo and AlphaGo Zero took a general approach to playing Go. Both evaluated the Go board and chose moves using a combination of two methods:

Conducting a "lookahead" search: looking ahead several moves by simulating games, and hence seeing which current move is most likely to lead to a "good" position in the future.

Assessing positions based on an "intuition" of whether a position is "good" or "bad", that is, whether it is likely to result in a win or a loss.

Go is a truly intricate game, which means computers can't merely search all possible moves using a brute-force approach to discover the best one.

Method 1: Lookahead

Before AlphaGo, all the finest Go programs tackled this issue by using "Monte Carlo Tree Search", or MCTS. This process involves initially exploring numerous possible moves on the board and then focusing the search over time, as certain moves are found to be more likely to result in wins than others.
Source: LOC

Both AlphaGo and AlphaGo Zero apply a fairly elementary version of MCTS for their "lookahead", to correctly maintain the tradeoff between exploring new sequences of moves and more deeply exploring already-explored sequences. Although MCTS has been at the heart of all effective Go programs preceding AlphaGo, it was DeepMind's smart combination of this method with a neural network-based "intuition" that enabled it to attain superhuman performance.

Method 2: Intuition

DeepMind's pivotal innovation with AlphaGo was to utilize deep neural networks to identify the state of the game and then use this knowledge to effectively guide the search of the MCTS. In particular, they trained networks that could record:

The current board position
Which player was playing
The sequence of recent moves (in order to rule out certain moves as "illegal")

With this data, the neural networks could propose:

Which move should be played
Whether the current player is likely to win

So how did DeepMind train neural networks to do this? Well, AlphaGo and AlphaGo Zero used rather different approaches. AlphaGo had two separately trained neural networks: a policy network and a value network.

Source: AlphaGo's Nature Paper

DeepMind then fused these two neural networks with MCTS (that is, the program's "intuition" with its brute-force "lookahead" search) in an ingenious way. It used the networks that had been trained to predict:

Moves, to guide which branches of the game tree to search
Whether a position was "winning", to assess the positions encountered during the search

This let AlphaGo intelligently search imminent moves and eventually beat the world champion Lee Sedol. AlphaGo Zero, however, took this principle to the next level: its neural network's "intuition" was trained entirely differently from that of AlphaGo.
More specifically:

The neural network was trained to play moves that reflected the improved evaluations obtained from performing the "lookahead" search
The neural network was tweaked so that it was more likely to play moves like those that led to wins, and less likely to play moves like those that led to losses, during the self-play games

Much was made of the fact that no games between humans were used to train AlphaGo Zero: for any given state of a Go agent, it can constantly be made smarter by performing MCTS-based lookahead and using the results of that lookahead to upgrade the agent. This is how AlphaGo Zero was able to perpetually improve, from when it was an "amateur" all the way up to when it was better than the best human players.

Moreover, AlphaGo Zero's neural network architecture can be referred to as a "two-headed" architecture.

Source: Hacker Noon

Its first 20 layers were "blocks" of the kind typically seen in modern neural net architectures. These layers were followed by two "heads":

One head that took the output of the first 20 layers and produced probabilities of the Go agent making certain moves
Another head that took the output of the first 20 layers and produced a probability of the current player winning

What's more, AlphaGo Zero used a more state-of-the-art neural network architecture than AlphaGo: a "residual" architecture rather than a purely "convolutional" one. Deep residual learning was pioneered by Microsoft Research in late 2015, right around the time work on the first version of AlphaGo would have been concluded, so it is quite reasonable that DeepMind did not use it in the initial AlphaGo program.
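The "two-headed" idea can be illustrated with a toy forward pass in plain Python: a shared trunk (standing in here for the 20 residual blocks) feeds both a policy head, which outputs a probability for each candidate move, and a value head, which outputs a win estimate squashed into [-1, 1]. All weights, shapes, and function names below are illustrative, not taken from AlphaGo Zero.

```python
import math

def two_headed_forward(features, trunk_w, policy_w, value_w):
    """Toy forward pass of a two-headed network: one shared trunk,
    a policy head (move probabilities), and a value head (win estimate)."""
    # shared trunk: a single linear layer + ReLU stands in for the residual blocks
    hidden = [max(0.0, sum(f * w for f, w in zip(features, row))) for row in trunk_w]
    # policy head: linear layer + softmax over candidate moves
    logits = [sum(h * w for h, w in zip(hidden, row)) for row in policy_w]
    exps = [math.exp(l - max(logits)) for l in logits]
    policy = [e / sum(exps) for e in exps]
    # value head: linear layer + tanh, squashing the win estimate into [-1, 1]
    value = math.tanh(sum(h * w for h, w in zip(hidden, value_w)))
    return policy, value
```

Sharing the trunk means the features learned for choosing moves and for judging positions reinforce each other, which is part of why the combined architecture outperformed two separate networks.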
Notably, each of these two neural network-related changes (switching from the purely convolutional to the more advanced residual architecture, and using the "two-headed" neural network instead of separate networks) would on its own have accounted for nearly half of the increase in playing strength that was realized when both were combined.

Source: AlphaGo's Nature Paper

Wrapping it up

According to DeepMind:

"After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo - which had itself defeated 18-time world champion Lee Sedol - by 100 games to 0. After 40 days of self-training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as "Master", which has defeated the world's best players and world number one Ke Jie. Over the course of millions of AlphaGo vs AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie."

Further, the founder and CEO of DeepMind, Dr. Demis Hassabis, believes AlphaGo's algorithms are likely to be of most benefit in areas that require an intelligent search through an immense space of possibilities.

Author Bio

Gaurav is a Senior SEO and Content Marketing Analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing, and he loves to read and write about AI, machine learning, data science, and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter and LinkedIn.
DeepMind researchers provide theoretical analysis on recommender system, ‘echo chamber’ and ‘filter bubble effect’
What if AIs could collaborate using human-like values? DeepMind researchers propose a Hanabi platform.
Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Which Python framework is best for building RESTful APIs? Django or Flask?

Vincy Davis
07 May 2019
9 min read
Python is one of the top-rated programming languages, known for its simple syntax and for being a high-level, object-oriented, robust, general-purpose language. Python is a top choice for first-time programmers. Since its release in 1991, Python has evolved and is now powered by several frameworks, from web application development, scientific and mathematical computing, and graphical user interfaces to the latest REST API frameworks.

This article is an excerpt from the book 'Hands-On RESTful API Design Patterns and Best Practices', written by Harihara Subramanian and Pethura Raj. The book covers design strategy, essential and advanced RESTful API patterns, and the modernization of legacy apps into microservices-centric apps. In this article, we'll explore two comprehensive frameworks, Django and Flask, so that you can choose the best one for developing your RESTful API.

Django

Django is an open source web framework, available under the BSD license, designed to help developers create their web apps very quickly, as it takes care of many additional web-development needs. It includes several packages (also known as applications) to handle typical web-development tasks, such as authentication, content administration, scaffolding, templates, caching, and syndication. Let's look at the Django REST Framework (DRF), built with Python, and use it for REST API development and deployment.

Django REST Framework

DRF is an open source, well-matured Python and Django library intended to help app developers build sophisticated web APIs. DRF's modular, flexible, and customizable architecture makes it possible to develop both simple, turnkey API endpoints and complicated REST constructs. The goal of DRF is to separate a model from its wire representation, such as JSON or XML, and customize a set of class-based views to satisfy the specific API endpoint, using a serializer that describes the mapping between views and API endpoints.
Core features

DRF has many distinct features, including:

Web-browsable API

This feature enhances any REST API developed with DRF. It has a rich interface, and the web-browsable API supports multiple media types too. A browsable API means that the APIs we build will be self-describing, and the API endpoints we create as part of the REST services can return JSON or HTML representations. The interesting fact about the web-browsable API is that we can interact with it fully through the browser, and any endpoint that we interact with using a programmatic client will also be capable of responding with a browser-friendly view of the web-browsable API.

Authentication

One of the main attractive features of DRF is authentication; it supports broad categories of authentication schemes, from basic authentication, token authentication, session authentication, and remote user authentication to OAuth authentication. It also supports custom authentication schemes if we wish to implement one. DRF runs the authentication scheme at the start of the view, that is, before any other code is allowed to proceed. It determines the privileges of the incoming request from the permission and throttling policies and then decides whether the incoming request should be allowed or disallowed based on the matched credentials.

Serialization and deserialization

Serialization is the process of converting complex data, such as querysets and model instances, into native Python datatypes. This conversion facilitates rendering into formats such as JSON or XML. DRF supports serialization through serializer classes, which are similar to Django's Form and ModelForm classes. It provides a Serializer class, which helps control the output of responses, and ModelSerializer classes, which provide a simple mechanism for creating serializers that deal with model instances and querysets.
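As an illustration, a ModelSerializer declaration looks like the following sketch. The `Petition` model and its fields are hypothetical, assumed only for this example; in a real project the model would come from the app's models.py, and the code runs only inside a configured Django project.

```python
from rest_framework import serializers

class PetitionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Petition  # hypothetical Django model with these fields
        fields = ('id', 'title', 'author', 'created')

# Usage sketch:
# PetitionSerializer(petition_instance).data  -> native Python dict,
# ready to be rendered as JSON or XML by DRF's renderers
```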
Serializers also handle deserialization, that is, they allow parsed data to be converted back into complex types, after first validating the incoming data.

Other noteworthy features

Here are some other noteworthy features of the DRF:

Routers: DRF supports automatic URL routing for Django and provides a consistent and straightforward way to wire the view logic to a set of URLs
Class-based views: A dominant pattern that enables the reusability of common functionality
Hyperlinking APIs: DRF supports various styles (using primary keys, hyperlinking between entities, and so on) to represent the relationships between entities
Generic views: Allow us to build API views that map to the database models

DRF has many other features, such as caching, throttling, and testing.

Benefits of the DRF

Here are some of the benefits of the DRF:

Web-browsable API
Authentication policies
Powerful serialization
Extensive documentation and excellent community support
Simple yet powerful
Test coverage of source code
Secure and scalable
Customizable

Drawbacks of the DRF

Here are some facts that may disappoint some Python app developers who intend to use the DRF:

Monolithic: components get deployed together
Based on the Django ORM
Steep learning curve
Slow response time

Flask

Flask is a microframework for Python based on Werkzeug (a WSGI toolkit) and Jinja 2 (a template engine). It comes under BSD licensing. Flask is very easy to set up and simple to use. Like other frameworks, it comes with several out-of-the-box capabilities, such as a built-in development server, debugger, unit-test support, templating, secure cookies, and RESTful request dispatching. The powerful Flask-RESTful API framework is discussed below.

Flask-RESTful

Flask-RESTful is an extension for Flask that provides additional support for building REST APIs. You will never be disappointed with the time it takes to develop an API.
Flask-RESTful is a lightweight abstraction that works with your existing ORM/libraries and encourages best practices with minimal setup.

Core features of Flask-RESTful

Flask-RESTful comes with several built-in features. Django and Flask have many RESTful features in common, because they share almost the same supporting core features. The unique RESTful features of Flask-RESTful are mentioned below.

Resourceful routing

The design goal of Flask-RESTful is to provide resources built on top of Flask's pluggable views, which offer a simple way to access the HTTP methods. Consider the following example code:

```python
class Todo(Resource):
    def get(self, user_id):
        ...

    def delete(self, user_id):
        ...

    def put(self, user_id):
        args = parser.parse_args()
        ...
```

RESTful request parsing

Request parsing refers to an interface, modeled after the Python parser interface for command-line arguments (argparse), that provides uniform and straightforward access to any variable that comes within the request object (flask.request).

Output fields

In most cases, app developers prefer to control the rendering of response data, and Flask-RESTful provides a mechanism where you can use ORM models or even custom classes as objects to render. Another interesting fact about this framework is that app developers don't need to worry about exposing any internal data structures, as it lets one format and filter the response objects. So, when we look at the code, it will be evident which data will be rendered and how it will be formatted.

Other noteworthy features

Here are some other noteworthy features of Flask-RESTful:

Api: The main entry point for the RESTful API, which we initialize with the Flask application
ReqParse: Enables us to add and parse multiple arguments in the context of a single request
Input: A useful helper that parses an input string and returns true or false depending on the input.
If the input is from the JSON body, the type is already a native Boolean and is passed through without further parsing.

Benefits of the Flask framework

Here are some of the benefits of the Flask framework:

Built-in development server and debugger
Out-of-the-box RESTful request dispatching
Support for secure cookies
Integrated unit-test support
Lightweight
Very minimal setup
Faster (performance)
Easy NoSQL integration
Extensive documentation

Drawbacks of Flask

Here are some of Flask and Flask-RESTful's disadvantages:

Version management (managed by developers)
No brownie points, as it doesn't have browsable APIs
May incur a steep learning curve

Frameworks – a table of reference

The following reference lists a few other prominent micro-frameworks, their supported programming languages, and their features:

Blade (Java): Fast and elegant MVC framework for Java 8. Prominent features: lightweight; high performance; based on the MVC pattern; RESTful-style router interface; built-in security.

Play Framework (Java/Scala): High-velocity reactive web framework for Java and Scala. Prominent features: lightweight, stateless, and web-friendly architecture; built on Akka; supports predictable and minimal resource consumption for highly scalable applications; developer-friendly.

Ninja Web Framework (Java): Full-stack web framework. Prominent features: fast; developer-friendly; rapid prototyping; plain vanilla Java with dependency injection and first-class IDE integration; simple and fast to test (mocked tests/integration tests); excellent build and CI support; clean codebase that is easy to extend.

RESTEASY (Java): JBoss-based implementation that integrates several frameworks to help build RESTful web and Java applications. Prominent features: fast and reliable; large community; enterprise-ready; security support.

RESTLET (Java): A lightweight and comprehensive framework based on Java, suitable for both server and client applications. Prominent features: lightweight; large community; native REST support; set of connectors.

Express.js (JavaScript): Minimal and flexible Node.js-based JavaScript framework for mobile and web applications. Prominent features: HTTP utility methods; security updates; templating engine.

Laravel (PHP): An open source web-app builder based on PHP and the MVC architecture pattern. Prominent features: intuitive interface; Blade template engine; Eloquent ORM as default.

Phoenix (Elixir): Powered by the Elixir functional language, a reliable and fast micro-framework. Prominent features: MVC-based; high application performance; the Erlang virtual machine enables better use of resources.

Pyramid (Python): Python-based micro-framework. Prominent features: lightweight; function decorators; events and subscribers support; easy implementation and high productivity.

Summary

It's evident that Python has two excellent frameworks. Depending on the programming language you intend to use and the features you require, you can choose the framework that suits your project.

If you are interested in learning more about the design strategy, guidelines, and best practices of RESTful API patterns, you can refer to our book 'Hands-On RESTful API Design Patterns and Best Practices' here.

Stack Overflow survey data further confirms Python’s popularity as it moves above Java in the most used programming language list.
Svelte 3 releases with reactivity through language instead of an API
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript