Defining REST and its various architectural styles

Sugandha Lahoti
11 Jul 2019
9 min read
RESTful web services are services built according to REST principles. The idea is to design them so that they work well on the web. But what is REST? Let's start from the beginning by defining REST.

This article is taken from the book Hands-On RESTful Web Services with TypeScript 3 by Biharck Muniz Araújo. This book is a step-by-step guide that will help you design, develop, scale, and deploy RESTful APIs with TypeScript 3 and Node.js.

In this article, we will learn what REST is and discuss the various REST architectural styles.

What is REST?

The REST (Representational State Transfer) style is a set of software engineering practices that contains constraints that should be used in order to create web services in distributed hypermedia systems. REST is not a tool and neither is it a language; in fact, REST is agnostic of protocols, components, and languages. It is important to say that REST is an architectural style and not a toolkit. REST provides a set of design rules in order to create stateless services that are shown as resources and, in some cases, sources of specific information such as data and functionality. The identification of each resource is performed by its unique Uniform Resource Identifier (URI).

REST describes simple interfaces that transmit data over a standardized interface such as HTTP and HTTPS without any additional messaging layer, such as Simple Object Access Protocol (SOAP). The consumer will access REST resources via a URI using HTTP methods (this will be explained in more detail later). After the request, it is expected that a representation of the requested resource is returned. The representation of any resource is, in general, a document that reflects the current or intended state of the requested resource.

REST architectural styles

The REST architectural style describes six constraints. These constraints were originally described by Roy Fielding in his Ph.D. thesis. They include the following:

- Uniform interface
- Stateless
- Cacheable
- Client-server architecture
- A layered system
- Code on demand (optional)

We will discuss them all minutely in the following subsections.

Uniform interface

Uniform interface is a constraint that describes a contract between clients and servers. One of the reasons to create an interface between them is to allow each part to evolve regardless of the other. Once there is a contract aligned between the client and server parts, they can start their work independently because, at the end of the day, the way they will communicate is firmly based on the interface.

The uniform interface is divided into four main groups, called principles:

- Resource-based
- The manipulation of resources using representations
- Self-descriptive messages
- Hypermedia as the Engine of Application State (HATEOAS)

Let's talk more about them.

Resource-based

One of the key things when a resource is being modeled is the URI definition. The URI is what defines a resource as unique. This representation is what will be returned for clients. If you decided to perform a GET to the order URI, the resource that returns should be a representation of an order containing the order ID, creation date, and so on. The representation should be in JSON or XML. Here is a JSON example:

```json
{
  "id": 1234,
  "creation-date": "1937-01-01T12:00:27.87+00:20",
  any-other-json-fields...
}
```

Here is an XML example:

```xml
<order>
  <id>1234</id>
  <creation-date>1937-01-01T12:00:27.87+00:20</creation-date>
  any-other-xml-fields
</order>
```

The manipulation of resources using representations

Following the happy path, when the client makes a request to the server, the server responds with a resource that represents the current state of its resource. This resource can be manipulated by the client. The client can request which kind of representation it desires, such as JSON, XML, or plain text. When the client needs to specify the representation, the HTTP Accept header is used. Here you can see an example in plain text:

```
GET https://<HOST>/orders/12345
Accept: text/plain
```

The next one is in JSON format:

```
GET https://<HOST>/orders/12345
Accept: application/json
```

Self-descriptive messages

In general, the information provided by the RESTful service contains all the information about the resource that the client should be aware of. There is also a possibility of including more information than the resource itself. This information can be included as a link. In HTTP, the Content-Type header is used for this, and the agreement needs to be bilateral—that is, the requestor needs to state the media type that it's waiting for and the receiver must agree about what the media type refers to. Some examples of media types are listed in the following table:

| Extension | Document Type | MIME type |
|---|---|---|
| .aac | AAC audio file | audio/aac |
| .arc | Archive document | application/octet-stream |
| .avi | Audio Video Interleave (AVI) | video/x-msvideo |
| .css | Cascading Style Sheets (CSS) | text/css |
| .csv | Comma-separated values (CSV) | text/csv |
| .doc | Microsoft Word | application/msword |
| .epub | Electronic publication (EPUB) | application/epub+zip |
| .gif | Graphics Interchange Format (GIF) | image/gif |
| .html | HyperText Markup Language (HTML) | text/html |
| .ico | Icon format | image/x-icon |
| .ics | iCalendar format | text/calendar |
| .jar | Java Archive (JAR) | application/java-archive |
| .jpeg | JPEG images | image/jpeg |
| .js | JavaScript (ECMAScript) | application/javascript |
| .json | JSON format | application/json |
| .mpeg | MPEG video | video/mpeg |
| .mpkg | Apple Installer Package | application/vnd.apple.installer+xml |
| .odt | OpenDocument text document | application/vnd.oasis.opendocument.text |
| .oga | OGG audio | audio/ogg |
| .ogv | OGG video | video/ogg |
| .ogx | OGG | application/ogg |
| .otf | OpenType font | font/otf |
| .png | Portable Network Graphics | image/png |
| .pdf | Adobe Portable Document Format (PDF) | application/pdf |
| .ppt | Microsoft PowerPoint | application/vnd.ms-powerpoint |
| .rar | RAR archive | application/x-rar-compressed |
| .rtf | Rich Text Format (RTF) | application/rtf |
| .sh | Bourne shell script | application/x-sh |
| .svg | Scalable Vector Graphics (SVG) | image/svg+xml |
| .tar | Tape Archive (TAR) | application/x-tar |
| .ts | TypeScript file | application/typescript |
| .ttf | TrueType Font | font/ttf |
| .vsd | Microsoft Visio | application/vnd.visio |
| .wav | Waveform Audio Format | audio/x-wav |
| .zip | ZIP archive | application/zip |
| .7z | 7-zip archive | application/x-7z-compressed |

There is also a possibility of creating custom media types. A complete list can be found here.

HATEOAS

HATEOAS is a way that the client can interact with the response by navigating within it through the hierarchy in order to get complementary information. For example, here the client makes a GET call to the order URI:

```
GET https://<HOST>/orders/1234
```

The response comes with a navigation link to the items within the 1234 order, as in the following code block:

```json
{
  "id": 1234,
  any-other-json-fields...,
  "links": [
    {
      "href": "1234/items",
      "rel": "items",
      "type": "GET"
    }
  ]
}
```

What happens here is that the link fields allow the client to navigate to 1234/items in order to see all the items that belong to the 1234 order.

Stateless

Essentially, stateless means that the necessary state during the request is contained within the request itself and is not persisted anywhere it could be recovered later. Basically, the URI is the unique identifier of the destination and the body contains the state, or changeable state, of the resource. In other words, after the server handles the request, the state could change, and it will be sent back to the requestor with the appropriate HTTP status code.

In comparison to the default session scope found in a lot of existing systems, the REST client must be the one responsible for providing all necessary information to the server, considering that the server should be idempotent. Statelessness allows high scalability, since the server will not maintain sessions. Another interesting point to note is that the load balancer does not care about sessions at all in stateless systems. In other words, the client always needs to pass the whole request in order to get the resource, because the server is not allowed to hold any previous request state.

Cacheable

The aim of caching is to never have to generate the same response more than once. The key benefits of using this strategy are an increase in speed and a reduction in server processing. Essentially, the request flows through a cache or a series of caches, such as local caching, proxy caching, or reverse proxy caching, in front of the service hosting the resource. If any of them match any criteria during the request (for example, the timestamp or client ID), the data is returned from the cache layer; if the caches cannot satisfy the request, the request goes to the server.

Client-server architecture

The REST style separates clients from the server. In short, whenever it is necessary to replace either the server or client side, things should flow naturally, since there is no coupling between them. The client side should not care about data storage and the server side should not care about the interface at all.

A layered system

Each layer must work independently and interact only with the layers directly connected to it. This strategy allows a request to be passed along without bypassing other layers. For instance, when scaling a service is desired, you might use a proxy working as a load balancer—that way, the incoming requests are delivered to the appropriate server instance. That being the case, the client side does not need to understand how the server is going to work; it just makes requests to the same URI. The cache is another example that behaves in another layer, and the client does not need to understand how it works either.

Code on demand

In summary, this optional constraint allows the client to download and execute code from the server on the client side. The constraint says that this strategy improves scalability, since the code can execute independently of the server on the client side.
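To tie these constraints together, here is a minimal sketch of an order resource exposed with TypeScript and Express, the stack the book targets. It is an illustration only: the Express setup, the Order shape, the route path, and the in-memory data are assumptions made for this sketch, not code from the book.

```typescript
import express, { Request, Response } from "express";

// Hypothetical resource shape, for illustration only.
interface Order {
  id: number;
  creationDate: string;
}

// Hypothetical in-memory store standing in for a real data source.
const orders: Record<number, Order> = {
  1234: { id: 1234, creationDate: "1937-01-01T12:00:27.87+00:20" },
};

const app = express();

// GET /orders/:id returns the current representation of the resource,
// plus a HATEOAS-style links array the client can follow for the items.
app.get("/orders/:id", (req: Request, res: Response) => {
  const order = orders[Number(req.params.id)];
  if (!order) {
    return res.status(404).json({ error: "order not found" });
  }
  return res.status(200).json({
    ...order,
    links: [{ href: `${order.id}/items`, rel: "items", type: "GET" }],
  });
});

app.listen(3000);
```

A client would then issue GET /orders/1234 with an Accept: application/json header and could follow the items link in the response to retrieve the order's items.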
In this post, we discussed the various REST architectural styles, based on six constraints. To know more about best practices for RESTful design, such as API endpoint organization, different ways to expose an API service, and how to handle large datasets, check out the book Hands-On RESTful Web Services with TypeScript 3.

- 7 reasons to choose GraphQL APIs over REST for building your APIs
- Which Python framework is best for building RESTful APIs? Django or Flask?
- Understanding advanced patterns in RESTful API [Tutorial]
British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach

Sugandha Lahoti
08 Jul 2019
6 min read
The UK’s watchdog, the ICO, is all set to fine British Airways more than £183m over a customer data breach. In September last year, British Airways notified the ICO about a data breach that compromised the personal identification information of over 500,000 customers and is believed to have begun in June 2018.

The ICO said in a statement, “Following an extensive investigation, the ICO has issued a notice of its intention to fine British Airways £183.39M for infringements of the General Data Protection Regulation (GDPR).”

Information Commissioner Elizabeth Denham said, "People's personal data is just that - personal. When an organisation fails to protect it from loss, damage or theft, it is more than an inconvenience. That's why the law is clear - when you are entrusted with personal data, you must look after it. Those that don't will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights."

How did the data breach occur?

According to the details provided on the British Airways website, payments through its main website and mobile app were affected from 22:58 BST August 21, 2018, until 21:45 BST September 5, 2018.

Per the ICO’s investigation, user traffic from the British Airways site was being directed to a fraudulent site from where customer details were harvested by the attackers. Personal information compromised included login, payment card, and travel booking details, as well as name and address information. The fraudulent site performed what is known as a supply chain attack, embedding code from third-party suppliers to run payment authorisation, present ads, allow users to log into external services, and so on.

According to cyber-security expert Prof Alan Woodward at the University of Surrey, the British Airways hack may possibly have been carried out by a company insider who tampered with the website and app's code for malicious purposes. He also pointed out that live data was harvested on the site rather than stored data.

https://twitter.com/EerkeBoiten/status/1148130739642413056

RiskIQ, a cyber security company based in San Francisco, linked the British Airways attack with the modus operandi of the threat group Magecart. Magecart injects scripts designed to steal sensitive data that consumers enter into online payment forms on e-commerce websites, either directly or through compromised third-party suppliers. Per RiskIQ, Magecart set up custom, targeted infrastructure to blend in with the British Airways website specifically and to avoid detection for as long as possible.

What happens next for British Airways?

The ICO noted that British Airways cooperated with its investigation and has made security improvements since the breach was discovered. The company now has 28 days to appeal.

Responding to the news, British Airways’ chairman and chief executive Alex Cruz said that the company was “surprised and disappointed” by the ICO’s decision, and added that the company has found no evidence of fraudulent activity on accounts linked to the breach. He said, "British Airways responded quickly to a criminal act to steal customers' data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused."

The ICO was appointed as the lead supervisory authority to tackle this case on behalf of other EU Member State data protection authorities. Under the GDPR ‘one stop shop’ provisions, the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO’s findings. The penalty is divided up between the other European data authorities, while the money that comes to the ICO goes directly to the Treasury.

What is somewhat surprising is that the ICO disclosed the fine publicly even before the supervisory authorities had commented on the ICO’s findings and a final decision had been taken based on their feedback, as pointed out by Simon Hania.

https://twitter.com/simonhania/status/1148145570961399808

Record-breaking fine appreciated by experts

The penalty imposed on British Airways is the first one to be made public since GDPR’s new policies about data privacy were introduced. GDPR makes it mandatory to report data security breaches to the information commissioner. It also increased the maximum penalty to 4% of the turnover of the penalized company.

The fine would be the largest the ICO has ever issued; the ICO last fined Facebook £500,000 for the Cambridge Analytica scandal, which was the maximum under the 1998 Data Protection Act. The British Airways penalty amounts to 1.5% of its worldwide turnover in 2017, making it roughly 367 times that of Facebook’s. In fact, it could have been even worse if the maximum penalty had been levied; the full 4% of turnover would have meant a fine approaching £500m. Such a massive fine would clearly send a shudder down the spine of any big corporation responsible for handling cybersecurity: if they compromise customers' data, a severe punishment is in order.

https://twitter.com/j_opdenakker/status/1148145361799798785

Carl Gottlieb, Privacy Lead & Data Protection Officer at Duolingo, summarized the key points of this attack in a much-appreciated Twitter thread:

- GDPR fines are for inappropriate security, as opposed to getting breached. Breaches are a good pointer but are not themselves actionable, so organisations need to implement security that is appropriate for their size, means, risk and need.
- Security is an organisation's responsibility, whether you host IT yourself, outsource it or rely on someone else not getting hacked.
- The GDPR has teeth against anyone that messes up security, but clearly action will be greatest where the human impact is most significant.
- Threats of GDPR fines are what created change in privacy and security practices over the last two years (not orgs suddenly growing a conscience). And with very few fines so far, improvements have slowed; this will help.
- Monetary fines are a great example to change behaviour in others, but a terrible punishment to drive change in an affected organisation. Other enforcement measures, e.g. ceasing processing of personal data (such as banning new signups), would be much more impactful.

https://twitter.com/CarlGottlieb/status/1148119665257963521

- Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content
- European Union fined Google 1.49 billion euros for antitrust violations in online advertising
- French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR
The road to Cassandra 4.0 – What does the future have in store?

Guest Contributor
06 Jul 2019
5 min read
In May 2019, DataStax hosted the Accelerate conference for Apache Cassandra™, inviting community members, DataStax customers, and other users to come together, discuss the latest developments around Cassandra, and find out more about the development of Cassandra. Nate McCall, Apache Cassandra Project Chair, presented the road to version 4.0 and what the community is focusing on for the future.

So, what does the future really hold for Cassandra? The project has been going for ten years already, so what still has to be added?

First off, listening to Nate’s keynote, the approach to development has evolved. As part of the development approach around Cassandra, it’s important to understand who is committing updates to Cassandra. The number of organisations contributing to Cassandra has increased, while the companies involved in the Project Management Committee include some of the biggest companies in the world. The likes of Instagram, Facebook and Netflix have team members contributing to and leading the development of Cassandra because it is essential to their businesses. For DataStax, we continue to support the growth and development of Cassandra as an open source project through our own code contributions, our development and training, and our drivers that are available for the community and for our customers alike.

Having said all this, there are still areas where Cassandra can improve as we get ready for 4.0. From a development standpoint, the big things to look forward to, as mentioned in Nate’s keynote, are the following.

An improved Repair model

For a distributed database, being able to carry on through any failure event is critical. After a failure, those nodes will have to be brought back online and then catch up with the transactions that they missed. Making nodes consistent is a big task, covered by the Repair function. In Cassandra 4.0, the aim is to make Repair smarter. For example, Cassandra can preview the impact of a repair on a host to check that the operation will go through successfully, and specific pull requests for data can also be supported. Alongside this, a new transient replication feature should reduce the cost and bandwidth overhead associated with repair. By replicating temporary copies of data to supplement full copies, the overall cluster should be able to achieve higher levels of availability while at the same time reducing the overall volume of storage required significantly. For companies running very large clusters, the cost savings achievable here could be massive.

A Messaging rewrite

Efficient messaging between nodes is essential when your database is distributed. Cassandra 4.0 will have a new messaging system in place based on Netty, an asynchronous event-driven network application framework. In practice, using Netty will improve the performance of messaging between nodes within clusters and between clusters. On top of this change, zero copy support will improve how quickly data can be streamed between nodes. Zero copy support achieves this by modifying the streaming path to add additional information into the streaming header, and then using zero-copy APIs to transfer bytes to and from the network and disk. This allows nodes to transfer large files faster.

Cassandra and Kubernetes support

Adding new messaging support and being able to transfer SSTables means that Cassandra can add more support for Kubernetes, and that Kubernetes can do interesting things around Cassandra too. One area that has been discussed is dynamic cluster management, where the number of nodes and the volume of storage can be increased or decreased on demand.

Sidecars

Sidecars are additional functional tools designed to work alongside a main process. These sidecars fill a gap that is not part of the main application or service, and that should remain separate but linked. For Cassandra, running sidecars allows developers to add more functionality to their operations, such as creating events on an application.

Java 11 support

Java 11 support has been added to the Cassandra trunk version and will be present in 4.0. This will allow Cassandra users to use Java 11, rather than version 8, which is no longer supported.

Diagnostic events and logging

This will make it easier for teams to use events for a range of things, from security requirements through to logging activities and triggering tools.

As part of the conference, there were two big trends that I took from the event. The first is – as Nate commented in his keynote – that there is a definite need for more community events that can bring together people who care about Cassandra and get them working together. The second is that Apache Cassandra is essential to many companies today. Some of the world’s largest internet companies and most valuable brands rely on Cassandra in order to achieve what they do. They are contributors and committers to Cassandra, and they have to be sure that Cassandra is ready to meet their requirements. For everyone using Cassandra, this means that versions have to be ready for use in production rather than shipping with issues still to be fixed. Things will get released when they are ready, rather than to meet a particular deadline, and the community will take the lead in ensuring that they are happy with any release.

Cassandra 4.0 is nearing release. It’ll be out when it is ready. Whether you are looking at getting involved with the project through contributions, developing drivers or writing documentation, there is a warm welcome for everyone in the run-up to what should be a great release. I’m already looking forward to ApacheCon later this year!

Author Bio

Patrick McFadin is the vice president of developer relations at DataStax, where he leads a team devoted to making users of DataStax products successful. Previously, he was chief evangelist for Apache Cassandra and a consultant for DataStax, where he helped build some of the largest and most exciting deployments in production; a chief architect at Hobsons; and an Oracle DBA and developer for over 15 years.
Understanding the Disambiguation of functional expressions in Lambda Leftovers [Tutorial]

Vincy Davis
05 Jul 2019
5 min read
Type inference was introduced with Java 5 and has been increasing in coverage ever since. With Java 8, the resolution of overloaded methods was restructured to allow for working with type inference. Before the introduction of lambdas and method references, a call to a method was resolved by checking the types of the arguments that were passed to it (the return type wasn't considered). With Java 8, implicit lambdas and implicit method references couldn't be checked for the types of values that they accepted, leading to restricted compiler capabilities to rule out ambiguous calls to overloaded methods. However, explicit lambdas and method references could still be checked against their arguments by the compiler. The lambdas that explicitly specify the types of their parameters are termed explicit lambdas. Limiting the compiler's ability and relaxing the rules in this way was purposeful: it lowered the cost of type-checking for lambdas and avoided brittleness. Lambda Leftovers also proposes using an underscore for unused parameters in lambdas, methods, and catch handlers.

[box type="shadow" align="" class="" width=""]This article is an excerpt taken from the book, "Java 11 and 12 - New Features", written by Mala Gupta. In this book, you will learn the latest developments in Java, right from variable type inference and simplified multi-threading through to performance improvements, and much more.[/box]

In this article, you will understand the existing issues with resolving overloaded methods when passing lambdas and when passing method references, and also the proposed solution.

Issues with resolving overloaded methods – passing lambdas

Let's cover the existing issues with resolving overloaded methods when lambdas are passed as method parameters. Let's define two interfaces, Swimmer and Diver, as follows:

```java
interface Swimmer {
    boolean test(String lap);
}

interface Diver {
    String dive(int height);
}
```

In the following code, the overloaded evaluate method accepts the interfaces Swimmer and Diver as method parameters:

```java
class SwimmingMeet {
    static void evaluate(Swimmer swimmer) {    // code compiles
        System.out.println("evaluate swimmer");
    }
    static void evaluate(Diver diver) {        // code compiles
        System.out.println("evaluate diver");
    }
}
```

Let's call the overloaded evaluate() method in the following code:

```java
class FunctionalDisambiguation {
    public static void main(String args[]) {
        SwimmingMeet.evaluate(a -> false);     // This code WON'T compile
    }
}
```

Revisit the lambda from the preceding code:

```java
a -> false    // this is an implicit lambda
```

Since the preceding lambda expression doesn't specify the type of its input parameter, it could be either String (the test() method of the Swimmer interface) or int (the dive() method of the Diver interface). Since the call to the evaluate() method is ambiguous, it doesn't compile.

Let's add the type of the method parameter to the preceding code, making it an explicit lambda:

```java
SwimmingMeet.evaluate((String a) -> false);    // This compiles!!
```

The preceding call is no longer ambiguous; the lambda expression accepts an input parameter of the String type and returns a boolean value, which maps to the evaluate() method that accepts a Swimmer as a parameter (the functional test() method in the Swimmer interface accepts a parameter of the String type).

Let's see what happens if the Swimmer interface is modified, changing the data type of the lap parameter from String to int. To avoid confusion, all of the code will be repeated, with the modifications marked in the comments:

```java
interface Swimmer {                  // test METHOD IS MODIFIED
    boolean test(int lap);           // String lap changed to int lap
}

interface Diver {
    String dive(int height);
}

class SwimmingMeet {
    static void evaluate(Swimmer swimmer) {    // code compiles
        System.out.println("evaluate swimmer");
    }
    static void evaluate(Diver diver) {        // code compiles
        System.out.println("evaluate diver");
    }
}
```

Consider the following code, thinking about which of the lines will compile:

```java
SwimmingMeet.evaluate(a -> false);         // line 1
SwimmingMeet.evaluate((int a) -> false);   // line 2
```

In the preceding example, the code on both lines won't compile, for the same reason—the compiler is unable to determine the call to the overloaded evaluate() method. Since both of the functional methods (that is, test() in the Swimmer interface and dive() in the Diver interface) accept one method parameter of the int type, it isn't feasible for the compiler to determine which method is being called. As a developer, you might argue that since the return types of test() and dive() are different, the compiler should be able to infer the correct calls. Just to reiterate: the return type of a method doesn't participate in method overloading. Overloaded methods must differ in the count or type of their parameters.

Issues with resolving overloaded methods – passing method references

Overloaded methods can also be defined with different parameter types. However, the following code doesn't compile:

```java
someMethod(Championship::reward);    // ambiguous call
```

In the preceding line of code, since the compiler is not allowed to examine the method reference, the code fails to compile. This is unfortunate, since the method parameters of the overloaded methods are Integer and String—no value can be compatible with both.

The proposed solution

The accidental compiler issues involved with overloaded methods that use either lambda expressions or method references can be resolved by allowing the compiler to also consider their return type. The compiler would then be able to choose the right overloaded method and eliminate the unmatched option.

Summary

For Java developers working with lambdas and method references, this article demonstrates what Java has in the pipeline to help ease problems. Lambda Leftovers plans to allow developers to define lambda parameters that can overshadow variables with the same name in their enclosing block. The disambiguation of functional expressions is an important and powerful feature. It will allow compilers to consider the return types of lambdas in order to determine the right overloaded methods. To know more about the exciting capabilities that are being added to the Java language in pattern matching and switch expressions, head over to the book, Java 11 and 12 - New Features.

- Using lambda expressions in Java 11 [Tutorial]
- How to deploy Serverless Applications in Go using AWS Lambda [Tutorial]
- Java 11 is here with TLS 1.3, Unicode 11, and more updates
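For reference, here is a minimal, hypothetical Java sketch of the kind of overload pair and method reference discussed in the method-reference section above. Only someMethod and Championship::reward come from the article; every other name, signature, and functional interface here is an assumption made for illustration, not the book's own listing.

```java
// Hypothetical functional interfaces; their parameter types (Integer and String)
// mirror the situation described in the article.
interface IntegerProcessor {
    void process(Integer value);
}

interface StringProcessor {
    void process(String value);
}

class Championship {
    // reward is overloaded here, which makes Championship::reward an "inexact"
    // method reference that the compiler may not examine during overload resolution.
    static void reward(Integer prizeMoney) {
        System.out.println("prize money: " + prizeMoney);
    }

    static void reward(Integer prizeMoney, String winner) {
        System.out.println(winner + " wins " + prizeMoney);
    }
}

class Disambiguation {
    static void someMethod(IntegerProcessor p) {
        p.process(100);
    }

    static void someMethod(StringProcessor p) {
        p.process("gold");
    }

    public static void main(String[] args) {
        // Ambiguous today, even though no reward overload accepts a String,
        // so only the IntegerProcessor overload could ever work:
        // someMethod(Championship::reward);        // does not compile

        // An explicit lambda (typed parameter) resolves the call:
        someMethod((Integer prize) -> Championship.reward(prize));
    }
}
```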
Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean anyway? This phrase means different things to different people and is often overused for the position which nearly matches an analyst-level profile! This term, although common, is worth defining what it really means, especially in the context of software development. This article is an excerpt from the book The Successful Software Manager written by an internationally experienced IT manager, Herman Fung. This book is a comprehensive and practical guide to managing software developers, software customers, and explores the process of deciding what software needs to be built, not how to build it. In this article, we’ll look into aspects you must be aware of before making the move to become a manager in the software industry. A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis and makes decisions, or more accurately, is responsible and accountable for the decisions they make. The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager’s role and their responsibilities are defined, which will be unique to each company. Even within the same company, it's subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss: Team Leader/Manager This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers, who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill the people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities. Development/Delivery Manager This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will manage running workshops and huddles to facilitate better overall team working and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties. Project Manager This person is most probably a non-techie, but there are exceptions, and this could be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available, while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO). 
They will not have people-management responsibilities for project team members. Agile practitioner As with all roles in today's world of tech, these categories will vary and overlap. They can even be held by the same person, which is becoming an increasingly common trait. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may have issues in choosing these generalized categories, (Team Leader, Development Manager and Project Manager)  and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are the categories that transcend any methodologies. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile, that is being a scrum master. Scrum master A scrum master is a role often compared – rightly or wrongly – with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and control. This difference is as much of a mindset as it is a strict practice, and is often referred to as being attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and the team stays focused on delivering together. Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a “modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether it means they will be managing people and writing less code. In this article, we looked into different leadership roles that are available for developers for their career progression plan. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager written by Herman Fung. “Developers don’t belong on a pedestal, they’re doing a job like everyone else” – April Wensel on toxic tech culture and Compassionate Coding [Interview] Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl” ‘I code in my dreams too’, say developers in Jetbrains State of Developer Ecosystem 2019 Survey
Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?

Fatema Patrawala
02 Jul 2019
4 min read
The emergence of worker solidarity and organization throughout the tech industry has been one of the few upsides to a difficult 18 months. And although it might be tempting to see this wave as somehow separate from the technical side of building software, the reality is that worker power - and, indeed, worker safety and respect - is crucial to ensuring safe and high-quality software.

Last week's npm bug, reported by users on Friday, is a good case in point. It follows a matter of months after news in April of surprise layoffs and accusations of punitive anti-union actions. It perhaps confirms what one former npm employee told The Register last month: "I think it’s time to break the in-case-of-emergency glass to assess how to keep JavaScript safe… Soon there won’t be any knowledgeable engineers left."

What was the npm 6.9.1 bug?

The npm 6.9.1 bug is complex. There are a number of layers to the issue, some of which relate to earlier iterations of the package manager. For those interested, Rebecca Turner, a former core contributor to npm who resigned her position at npm in March in response to the layoffs, explains in detail how the bug came about:

“...npm publish ignores .git folders by default but forces all files named readme to be included… And that forced include overrides the exclude. And then there was once a remote branch named readme… and that goes in the .git folder, gets included in the publish, which then permanently borks your npm install, because of EISGIT, which in turn is a restriction that’s afaik entirely vestigial, copied forward from earlier versions of npm without clear insight into why you’d want that restriction in the first place.”

Turner says she suspects the bug was “introduced with tar rewrite.” Whoever published it, she goes on to say, must have had a repository with a remote reference and failed to follow the setup guide, “which recommends using a separate copy of the repo for publication.”

Kat Marchán, CLI and Community Architect at npm, later confirmed that to fix the issue the team had published npm 6.9.2, but said that users would have to uninstall 6.9.1 manually before upgrading. “We are discussing whether to unpublish 6.9.1 as well, but this should stop any further accidents,” Marchán said.

The impact of npm’s internal issues

The important subplot to all of this is that npm 6.9.1 appears to have been delayed because of npm’s internal issues. A post on GitHub by Audrey Eschright, one of the employees currently filing a case against npm with the National Labor Relations Board, explained that work on the open source project had been interrupted because npm’s management had made the decision to remove “core employee contributors to the npm cli.”

The implication, then, is that management’s attitude has had a negative impact on npm 6.9.1. If the allegations of ‘union busting’ are true, then it would seem that preventing its workers from organizing to protect one another was more important than building robust and secure software. At a more basic level, whatever the reality of the situation, it would seem that npm’s management is unable to cultivate an environment that allows employees to do what they do best.

Why is this significant?

This is ultimately just a story about a bug. Not all that remarkable. But given the context, it’s significant because it highlights that tech worker organization, and how management responds to it, has a direct link to the quality and reliability of the software we use. If friction persists between the commercial leaders within a company and its engineers, software is the thing that’s going to suffer.

Read Next

- Surprise NPM layoffs raise questions about the company culture
- Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
- The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks
Experts discuss Dark Patterns and deceptive UI designs: What are they? What do they do? How do we stop them?

Sugandha Lahoti
29 Jun 2019
12 min read
Dark patterns are often used online to deceive users into taking actions they would otherwise not take under effective, informed consent. Dark patterns are generally used by shopping websites, social media platforms, mobile apps, and services as a part of their user interface design choices. Dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children.

Using dark patterns is unambiguously unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws), the European Union (under the Unfair Commercial Practices Directive and similar member state laws), and numerous other jurisdictions.

Earlier this week, at the Russell Senate Office Building, a panel of experts met to discuss the implications of dark patterns in the session Deceptive Design and Dark Patterns: What are they? What do they do? How do we stop them? The session included remarks from Senators Mark Warner and Deb Fischer, sponsors of the DETOUR Act, and a panel of experts including Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology).

The entire panel of experts included:

- Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology)
- Rana Foroohar (Global Business Columnist and Associate Editor, Financial Times)
- Amina Fazlullah (Policy Counsel, Common Sense Media)
- Paul Ohm (Professor of Law and Associate Dean, Georgetown Law School), also the moderator
- Katie McInnis (Policy Counsel, Consumer Reports)
- Marshall Erwin (Senior Director of Trust & Security, Mozilla)
- Arunesh Mathur (Dept. of Computer Science, Princeton University)

Dark patterns are growing in social media platforms, video games, and shopping websites, and are increasingly used to target children

The expert session was inaugurated by Arunesh Mathur (Dept. of Computer Science, Princeton University), who talked about his new study by researchers from Princeton University and the University of Chicago. The study suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They thus discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns.

One of the dark patterns was Sneak into Basket, which adds additional products to users’ shopping carts without their consent. For example, you would buy a bouquet on a website and the website, without your consent, would add a greeting card in the hope that you will actually purchase it.

Katie McInnis agreed and added that dark patterns not only undermine the choices that are available to users on social media and shopping platforms, but can also cost users money. User interfaces are sometimes designed to push a user away from protecting their privacy, making them tough to evaluate.

Amina Fazlullah, Policy Counsel at Common Sense Media, said that dark patterns are also being used to target children. Manipulative apps use design techniques to shame or confuse children into in-app purchases or to try to keep them on the app for longer. Children are mostly unable to discern these manipulative techniques. Sometimes the screen will have icons or buttons that appear to be a part of gameplay, and children will click on them without realizing that they're being asked to make a purchase, being shown an ad, or being directed to another site. There are games which ask for payments or microtransactions to continue the game.

Mozilla uses product transparency to curb dark patterns

Marshall Erwin, Senior Director of Trust & Security at Mozilla, talked about the negative effects of dark patterns and how Mozilla makes its own products more transparent. They have a set of checks and principles in place to avoid dark patterns:

- No surprises: What users figure out or come to understand about what is happening in the browser should be consistent with their expectations. If users are surprised, the browser needs to make a change, either by stopping the activity entirely or creating additional transparency that helps people understand.
- Anti-tracking technology: Cross-site tracking is one of the most pervasive and pernicious dark patterns across the web today, enabled by cookies. Browsers should take action to decrease the attack surface in the browser and actively protect people from those patterns online. Mozilla and Apple have introduced anti-tracking technology to actively intervene and protect people from parties that are probably not trustworthy.

Detour Act by Senators Warner and Fischer

In April, Warner and Fischer introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act, bipartisan legislation to prohibit large online platforms from using dark patterns to trick consumers into handing over their personal data. The act focuses on the activities of large online service providers (over a hundred million users visiting in a given month). Under this act, you cannot use practices that trick users into providing information or consenting. You will face new controls on conducting ‘psychological experiments on your users’, and you will no longer be able to target children under 13 with the goal of hooking them into your service. It extends additional rulemaking and enforcement abilities to the Federal Trade Commission.

“Protecting users’ personal data and user autonomy online are truly bipartisan issues”: Senator Mark Warner

In his presentation, Warner talked about how 2019 is the year when we need to recognize dark patterns and their ongoing manipulation of American consumers. While we've all celebrated the benefits that communities have gained from social media, there is also an enormous dark underbelly, he says. It is important that Congress steps up and that senators play a role so that Americans and their private data are not misused or manipulated going forward. Protecting users’ personal data and user autonomy online are truly bipartisan issues. This is not a liberal versus conservative question; it's much more future versus past, and about how we get this future right in a way that takes advantage of social media tools but also puts some of the appropriate constraints in place.

He says that the driving notion behind the Detour Act is that users should have choice and autonomy when it comes to their personal data. When a company like Facebook asks you to upload your phone contacts or some other highly valuable data to their platform, you ought to have a simple choice: yes or no. Companies that run experiments on you without your consent are being coercive, and the Detour Act aims to put appropriate protections in place that defend users' ability to make informed choices. In addition to prohibiting large online platforms from using dark patterns to trick consumers into handing over their personal data, the bill would also require informed consent for behavioral experimentation. In the process, the bill sends a clear message to the platform companies and the FTC that they are now in the business of preserving users' autonomy when it comes to the use of their personal data. The goal, Warner says, is simple: to bring some transparency to what remains a very opaque market and give consumers the tools they need to make informed choices about how and when to share their personal information.

“Curbing the use of dark patterns will be foundational to increasing trust online”: Senator Deb Fischer

Fischer argued that tech companies are increasingly tailoring users’ online experiences in ways that are more granular. On one hand, she says, you get a more personalized user experience and platforms are more responsive; however, it's this variability that allows companies to take that design just a step too far. Companies are constantly competing for users' attention, and this increases the motivation for more intrusive and invasive user design. The ability of online platforms to shape the visual interfaces that billions of people view is an incredible influence. It forces us to assess the impact of design on user privacy and well-being.

Fundamentally, the Detour Act would prohibit large online platforms from purposely using deceptive user interfaces - dark patterns. The Detour Act would provide a better accountability system for improved transparency and autonomy online. The legislation would take an important step toward restoring the hidden options. It would give users a tool to get out of the maze that coaxes you to just click on ‘I agree’. A privacy framework that involves consent cannot function properly if it doesn't ensure the user interface presents fair and transparent options. The Detour Act would enable the creation of a professional standards body, which could register with the Federal Trade Commission. This would serve as a self-regulatory body to develop best practices for UI design, with the FTC as a backup.

She adds, “We need clarity for the enforcement of dark patterns that don't directly involve our wallets. We need policies that place value on user choice and personal data online. We need a stronger mechanism to protect the public interest when the goal for tech companies is to make people engage more and more. User consent remains weakened by the presence of dark patterns and unethical design. Curbing the use of dark patterns will be foundational to increasing trust online. The detour act does provide a key step in getting there.”

“The DETOUR act is calling attention to asymmetry and preventing deceptive asymmetry”: Tristan Harris

Tristan says that companies are now competing not on manipulating your immediate behavior but on manipulating and predicting the future. For example, Facebook has something called loyalty prediction, which allows it to sell to an advertiser the ability to predict when you're going to become disloyal to a brand. It can sell that opportunity to another advertiser before you probably even know you're going to switch. The DETOUR Act is a huge step in the right direction because it's about calling attention to asymmetry and preventing deceptive asymmetry. We need a new relationship for this asymmetric power by having a duty of care. It’s about requiring asymmetrically powerful technologies to be in the service of the people they are supposed to protect. He says we need to switch to a regenerative attention economy that actually treats attention as sacred and does not directly tie profit to user extraction.

Top questions raised by the panel and online viewers

Does A/B testing result in dark patterns?

Dark patterns are often a result of A/B testing, where a designer may try things that lead to better engagement or nudge users in a way that benefits the company. However, A/B testing isn't the problem; it's the intention behind how A/B testing is being used. Companies and other organizations should have oversight of the different experiments that they are conducting to see whether A/B testing is actually leading to some kind of concrete harm. The challenge in this space is drawing a line between A/B testing features and optimizing for engagement and decreasing friction.

Are consumers smart enough to tackle dark patterns on their own, or do we need legislation?

It's well established that children, whose brains are just developing, are unable to discern these types of deceptive techniques, so for kids especially, these practices should be banned. For vulnerable families who are juggling all sorts of concerns around income and access to jobs, transportation, and health care, putting this on their plate as well is just unreasonable. Dark patterns are deployed for an array of opaque reasons the average user will never recognize. From a consumer perspective, going through and identifying dark pattern techniques that these platform companies have spent hundreds of thousands of dollars developing to be as opaque and as tricky as possible is an unrealistic expectation to put on consumers. This is why the DETOUR Act and this type of regulation are absolutely necessary and the only way forward.

What is it about the largest online providers that makes us want to focus on them first or only? Is it their scale, or do they have more powerful dark patterns? Is it because they're harming more people, or is it politics?

Sometimes larger companies stay wary of indulging in dark patterns because they face a greater risk of getting caught and suffering PR backlash. However, they do engage in manipulative practices, and that warrants a lot of attention. Moreover, targeting bigger companies is just one part of a more comprehensive privacy enforcement environment. Hitting companies that have a large number of users is also great for consumer engagement. Obviously there is a need to target more broadly, but this is a starting point.

If Facebook were to suddenly reclass itself and its advertising business model, would you still trust them?

No, the leadership that's in charge of Facebook now cannot be trusted, especially given the organizational culture that has been building. There are change efforts going on inside Google and Facebook right now, but they are getting gridlocked. Even if employees want to see policies changed, they still have bonus structures and employee culture to keep in mind.

We recommend you go through the full hearing here. You can read more about the Detour Act here.

- U.S. senators introduce a bipartisan bill that bans social media platforms from using ‘dark patterns’ to trick its users
- How social media enabled and amplified the Christchurch terrorist attack
- A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want
Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

Bhagyashree R
28 Jun 2019
6 min read
The internet was ablaze when Evan You, creator of Vue, published an RFC to introduce a function-based component API earlier this month. This followed a huge discussion in the Vue community on whether such an API is really needed. https://twitter.com/youyuxi/status/1137567675356291072 This proposal came after Evan You previewed an experimental Hooks API back in November at Vue Conf Toronto 2018. Why Vue needs a function-based component API Components help you to abstract your code into smaller pieces. This gives your web applications a better structure, makes the code more readable and understandable and most importantly enables you to reuse logic across multiple components. According to the RFC, the components API in Vue 2.x has some drawbacks in terms of reusability. The three common patterns that are generally used to achieve reusability in Vue are mixins, high-order components, and renderless components. Each of these come with their share of drawbacks: Mixins bring implicit dependencies in code, causes name clashes, and make your code harder to understand. HOCs can often be verbose, involve lots of passing props and hoisting statics, and can cause name conflicts. Renderless components require extra stateful component instances that come at the cost of performance. This function-based component API aims to address all these drawbacks. Inspired by React Hooks, its objective is to provide developers a “clean and flexible way” to compose logic and share it between components. The team plans to achieve this by moving the logic code to a "composition function" and returning reactive state. Another motivation behind this proposed change is to provide better built-in TypeScript type inference support as function-based APIs are naturally type-friendly. Also, code written with function-based APIs compresses better than an object or class-based code. What Vue developers think about this RFC? The Vue community was a little taken aback with this proposal that will essentially change the way they used to write Vue. They were concerned that this will take away the most desirable property of Vue, which is its simplicity. Vue’s class-based API made it easy to understand and get started with. However, bringing function-based API to Vue will complex things in exchange for very fewer advantages. Some argued that this change will make it another React. “Like a lot of others here, I chose Vue vs React for the simplicity and readability of code. The class-based API was easy to understand and pick up. If I wanted React, I would have just chosen React from the beginning. I get that there are some technical advantages to doing this, but Vue 3 is starting to really turn me off of staying with Vue going forward,” a developer shared on a Reddit thread. Developers were concerned that the time they have invested in learning Vue will go to waste as everything is about to change. A Vue developer commented on Reddit, “You learn to do something one way and then they change it up on you. Might as well just switch to react at this point.” Many compared this scenario to that of Angular 1->2 or Python 2->3 and suggested switching to Svelte to avoid the mess. Some, however, liked the syntax and are looking forward to playing around with the API.  A developer shared, “But I read through, checked out the new (simpler) example, read Evan's arguments about logical task grouping, and on a second read with a more open mind, I actually kind of like the new syntax and am now looking forward to trying it out. 
I'm glad they agreed to keep the object syntax around though.” How the Vue team responded When the RFC was first published it implied that the current API will be deprecated in a future major release. Also, there was a lot of confusion around the "compatibility" and "stable" build.  Many developers felt that this RFC is already “set in stone” from the way it was communicated. They felt that the core team has already decided to bring this API to Vue without community consultation. So, one of the reasons behind this confusion was how the change was communicated. The team acknowledged this and asked for suggestions from the community to improve their communication. https://twitter.com/N_Tepluhina/status/1142715703558103040 The core team clarified that the update will be additive and the team has no plans to remove the Object API in a future major release. Evan You, the creator of Vue, said in a thread, “feel free to stay with the current API for as long as you wish. As long as the community feels there's a need for the old API to stay, it will stay. The only one that can make the decision to switch to the new API is yourself.” He also addressed the concerns on a Hacker News thread: There is a lot of FUD in this thread so we need to clarify a bit: - This API is purely additive to 2.x and doesn't break anything. - 3.0 will have a standard build which adds this API on top of 2.x API, and an opt-in "lean build" which drops a number of 2.x APIs for a smaller and faster runtime. - This is an open RFC, which means it's not set in stone. The whole point of having an RFC is so that users can voice their opinions. It's not like we are shipping this tomorrow. After listening to various perspectives shared by developers, the core team revised the RFC accordingly putting everybody finally at ease. Guillaume Chau, a member of the Vue core team, put out a clear and concise plan of action on Twitter to which people are responding positively. This plan reassured that the Object API will not be deprecated until the community stops using it and the proposed API will be first offered as a standalone plugin for Vue 2.x. https://twitter.com/Akryum/status/1143114880960126976 Some developers have also started to try out the new API: https://twitter.com/igor_randj/status/1143302939496370177 https://twitter.com/cmsalvado/status/1143230023089786880 Closing thoughts Open source programmers put their time and efforts in building software that helps the community and an RFC (request for comments) is a way for the community to get involved in building high quality software at scale. Through RFC you can share your constructive feedback on why a change is necessary or is not necessary. And, all this can be done in a respectful way. This showed us a very good example of how an RFC should really work. Publishing an RFC, discussing with the community, listening to the community, and deciding collectively what to do next. Despite some hiccups in communication, the Vue core team did a good job in engaging with the community to develop the roadmap for the function-based component API in Vue. Read the RFC for function-based component API for more details. Vue 2.6 is now out with a new unified syntax for slots, and more Learning Vue in 2019 with Anthony Gore’s developer knowledge map Evan You shares Vue 3.0 updates at VueConf Toronto 2018
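For readers who want a feel for what the proposal looks like in practice, here is a minimal sketch of a counter component written against the function-based API as drafted in the June 2019 RFC. The imported helpers (value, computed, onMounted) and the setup() entry point come from that draft and may change as the RFC evolves, so treat this as an illustration rather than final Vue 3 syntax (the template is omitted):

import { value, computed, onMounted } from 'vue'

export default {
  setup() {
    // Reactive state lives in the composition function instead of data()
    const count = value(0)
    const double = computed(() => count.value * 2)

    function increment() {
      count.value++
    }

    onMounted(() => {
      console.log('Counter mounted with initial count', count.value)
    })

    // Everything returned here is exposed to the template
    return { count, double, increment }
  }
}

The same pattern lets a piece of logic (say, mouse tracking or data fetching) be extracted into a plain function and reused across components, which is the reusability problem the RFC sets out to address.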


“I'm concerned about Libra's model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition

Fatema Patrawala
26 Jun 2019
7 min read
In February, Facebook made its debut into the blockchain space by acquiring Chainspace, a London-based, Gibraltar-registered blockchain venture. Chainspace was a small start-up founded by several academics from the University College London Information Security Research Group. Authors of the original Chainspace paper were Mustafa Al-Bassam, Alberto Sonnino, Shehar Bano, Dave Hrycyszyn and George Danezis, some of the UK’s leading privacy engineering researchers. Following the acquisition, last week Facebook announced the launch of its new cryptocurrency, Libra which is expected to go live by 2020. The Libra whitepaper involves a wide array of authors including the Chainspace co-founders namely Alberto Sonnino, Shehar Bano and George Danezis. According to Wired, David Marcus, a former Paypal president and a Coinbase board member, who resigned from the board last year, is appointed by Facebook to lead the project Libra. Libra isn’t like other cryptocurrencies such as Bitcoin or Ethereum. As per the Reuters report, the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run the computers. Mustafa Al-Bassam, one of the research co-founders of Chainspace who did not join Facebook posted a detailed Twitter thread yesterday. The thread included particularly his views on this new crypto-currency - Libra. https://twitter.com/musalbas/status/1143629828551270401 On Libra’s decentralized model being less censorship resistant Mustafa says, “I don't have any doubt that the Libra team is building Libra for the right reasons: to create an open, decentralized payment system, not to empower Facebook. However, the road to dystopia is paved with good intentions, and I'm concerned about Libra's model for decentralization.” He further pointed the discussion towards a user comment on GitHub which reads, “Replace "decentralized" with "distributed" in readme”. Mustafa explains that Libra’s 100 node closed set of validators is seen more as decentralized in comparison to Bitcoin. Whereas Bitcoin has 4 pools that control >51% of hashpower. According to the Block Genesis, decentralized networks are particularly prone to Sybil attacks due to their permissionless nature. Mustafa takes this into consideration and poses a question if Libra is Sybil resistant, he comments, “I'm aware that the word "decentralization" is overused. I'm looking at decentralization, and Sybil-resistance, as a means to achieve censorship-resistance. Specifically: what do you have to do to reverse or censor transaction, how much does it cost, and who has that power? My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks.” He further explains that, “In the banking system there is no majority of parties that can collude together to deny two banks the ability to maintain a relationship which each other - in the worst case scenario they can send physical cash to each other, which does not require a ledger. It's permissionless.” Mustafa adds to this point with a surreal imagination that if Libra was the only way to transfer currency and it is less censorship resistant than we’d be in worse situations, he says, “With cryptocurrency systems (even decentralized ones), there is always necessarily a majority of consensus nodes (e.g. 
a 51% attack) that can collude together from censor or reverse transactions. So if you're going to create digital cash, this is extremely important to consider. With Libra, censorship-resistance is even more important, as Libra could very well end up being the world's "de facto" currency, and if the Libra network is the only way to transfer that currency, and it's less censorship-resistant, we're worse off than where we started.” On Libra's permissioned consensus node selection authority Mustafa says that, “Libra's current permissioned consensus node selection authority is derived directly from nation state-enforced (Switzerland's) organization laws, rather than independently from stakeholders holding sovereign cryptographic keys.” Source - Libra whitepaper What this means is the "root API" for Libra's node selection mechanism is the Libra Association via the Swiss Federal Constitution and the Swiss courts, rather than public key cryptography. Mustafa also pointed out that the association members for Libra are large $1b+ companies, and US-based. Source - Libra whitepaper To this there could be an argument that governments can regulate the people who hold those public keys, but a key difference is that this can't be directly enforced without access to the private key. Mustafa explained this point with an example from last year, where Iran tested how resistant global payments are to US censorship. Iran requested a 300 million Euro cash withdrawal via Germany's central bank which they rejected under US pressure. Mustafa added, “US sanctions have been bad on ordinary people in Iran, but they can at least use cash to transact with other countries. If people wouldn't even be able to use cash in the future because Libra digital cash isn't censorship-resistant, that would be *brutal*.” On Libra’s proof-of-stake based permissionless mechanism Mustafa argues that the Libra whitepaper confuses consensus with Sybil-resistance. His views are Sybil-resistant node selection through permissionless mechanisms such as proof-of-stake, which selects a set of cryptographic keys that participate in consensus, is necessarily more censorship-resistant than the Association-based model. Proof-of-stake is a Sybil-resistance mechanism, not a consensus mechanism. The "longest chain rule", on the other hand, is the consensus mechanism. He says that Libra has outlined a proof-of-stake-based permissionless roadmap and will transition to this in the next 5 years. Mustafa feels 5 years for this will be way too late when Group of seven nations (G7) are already lining up the taskforce to control Libra. Mustafa also thinks that it isn’t appropriate about Libra's whitepaper to claim the need to start permissioned for the next five years. He says permissionlessness and scalable secure blockchains are an unsolved technical problem, and they need community's help to research this. Source - Libra whitepaper He says, “It's as if they ignored the past decade of blockchain scalability research efforts. Secure layer-one scalability is a solved research problem. Ethereum 2.0, for example, is past the research stage and is now in the implementation stage, and will handle more than Libra's 1000tps.” Mustafa also points out that Chainspace was specifically in the middle of implementing a permissionless sharded blockchain with higher on-chain scalability than Libra's 1000tps. With FB's resources, this could've easily been accelerated and made a reality. 
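To make the distinction Mustafa draws between Sybil resistance and consensus concrete, here is a toy sketch (not Libra's or Chainspace's actual design) of how a permissionless, proof-of-stake style system might choose its consensus participants: the right to validate is sampled in proportion to stake held by self-sovereign keys, rather than granted to a fixed list of association members. The names and weighting scheme below are purely illustrative assumptions:

// Hypothetical stake-weighted sampling of a validator set.
// Sybil resistance: creating many identities does not help unless each holds real stake.
interface Candidate {
  publicKey: string;
  stake: number; // tokens bonded by this key
}

function sampleValidators(candidates: Candidate[], seats: number, rand: () => number = Math.random): Candidate[] {
  const selected: Candidate[] = [];
  const pool = [...candidates];
  for (let i = 0; i < seats && pool.length > 0; i++) {
    const total = pool.reduce((sum, c) => sum + c.stake, 0);
    let ticket = rand() * total;
    const idx = pool.findIndex(c => (ticket -= c.stake) <= 0);
    selected.push(...pool.splice(idx, 1)); // pick without replacement, weighted by stake
  }
  return selected;
}

// The consensus rule itself (longest chain, or a BFT protocol) is a separate layer
// that the sampled validators then run among themselves.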
He says there are many research-led blockchain projects that have implemented, or are implementing, scalability strategies that achieve more than Libra's 1000tps without heavily trading off security, so the "community" research on this is plentiful; it is just that Facebook is being lazy. He concludes, “I find it a great shame that Facebook has decided to be anti-social and launch a permissioned system as they need the community's help as scalable blockchains are an unsolved problem, instead of using their resources to implement on a decade of research in this area.”

People have appreciated Mustafa for giving a detailed review of Libra. One of the tweets reads, “This was a great thread, with several acute and correct observations.”

https://twitter.com/ercwl/status/1143671361325490177

Another tweet reads, “Isn't a shard (let's say a blockchain sharded into 100 shards) by its nature trading off 99% of its consensus forming decentralization for 100x (minus overhead, so maybe 50x?) increased scalability?” Mustafa responded, “No because consensus participants are randomly sampled into shards from the overall consensus set, so shards should be roughly uniformly secure, and in the event that a shard misbehaves, fraud and data availability proofs kick in.”

https://twitter.com/ercwl/status/1143673925643243522

One tweet also suggests that 1/3 of Libra validators could enforce censorship even against the will of the 2/3 majority. In contrast, censoring Bitcoin requires a majority of miners, and unlike Libra, there is no entry barrier other than capital to become a Bitcoin miner.

https://twitter.com/TamasBlummer/status/1143766691089977346

Let us know your views on Libra and how you expect it to perform.

Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge
Facebook releases Pythia, a deep learning framework for vision and language multimodal research


A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want

Sugandha Lahoti
26 Jun 2019
6 min read
A new study by researchers from Princeton University and the University of Chicago suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. They discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns. Note: All images in the article are taken from the research paper. What are dark patterns Dark patterns are generally used by shopping websites as a part of their user interface design choices. These dark patterns coerce, steer, or deceive users into making unintended and potentially harmful decisions, benefiting an online service. Shopping websites trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. These patterns are not just limited to shopping websites, and find common applications on digital platforms including social media, mobile apps, and video games as well. At extreme levels, dark patterns can lead to financial loss, tricking users into giving up vast amounts of personal data, or inducing compulsive and addictive behavior in adults and children. Researchers used a web crawler to identify text-based dark patterns The paper uses an automated approach that enables researchers to identify dark patterns at scale on the web. The researchers crawled 11K shopping websites using a web crawler, built on top of OpenWPM, which is a web privacy measurement platform. The web crawler was used to simulate a user browsing experience and identify user interface elements. The researchers used text clustering to extract recurring user interface designs from the resulting data and then inspected the resulting clusters for instances of dark patterns. The researchers also developed a novel taxonomy of dark pattern characteristics to understand how dark patterns influence user decision-making. Based on the taxonomy, the dark patterns were classified basis whether they lead to an asymmetry of choice, are covert in their effect, are deceptive in nature, hide information from users, and restrict choice. The researchers also mapped the dark patterns in their data set to the cognitive biases they exploit. These biases collectively described the consumer psychology underpinnings of the dark patterns identified. They also determine that many instances of dark patterns are enabled by third-party entities, which provide shopping websites with scripts and plugins to easily implement these patterns on their websites. Key stats from the research There are 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories. These 1,841 dark patterns were present on 1,267 of the 11K shopping websites (∼11.2%) in their data set. Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns. 234 instances of deceptive dark patterns were uncovered across 183 websites 22 third-party entities were identified that provide shopping websites with the ability to create dark patterns on their sites. Dark pattern categories Sneaking Attempting to misrepresent user actions. Delaying information that users would most likely object to once made available. 
Sneak into Basket: The “Sneak into Basket” dark pattern adds additional products to users’ shopping carts without their consent Hidden Subscription:  Dark pattern charges users a recurring fee under the pretense of a one-time fee or a free trial Hidden Costs: Reveals new, additional, and often unusually high charges to users just before they are about to complete a purchase. Urgency Imposing a deadline on a sale or deal, thereby accelerating user decision-making and purchases. Countdown Timers: Dynamic indicator of a deadline counting down until the deadline expires. Limited-time Messages: Static urgency message without an accompanying deadline Misdirection Using visuals, language, or emotion to direct users toward or away from making a particular choice. Confirmshaming:  It uses language and emotion to steer users away from making a certain choice. Trick Questions: It uses confusing language to steer users into making certain choices. Visual Interference: It uses style and visual presentation to steer users into making certain choices over others. Pressured Selling: It refers to defaults or often high-pressure tactics that steer users into purchasing a more expensive version of a product (upselling) or into purchasing related products (cross-selling). Social proof Influencing users' behavior by describing the experiences and behavior of other users. Activity Notification:  Recurring attention grabbing message that appears on product pages indicating the activity of other users. Testimonials of Uncertain Origin: The use of customer testimonials whose origin or how they were sourced and created is not clearly specified. Scarcity Signalling that a product is likely to become unavailable, thereby increasing its desirability to users. Examples such as Low-stock Messages and High-demand Messages come under this category. Low-stock Messages: It signals to users about limited quantities of a product High-demand Messages: It signals to users that a product is in high demand, implying that it is likely to sell out soon. Obstruction Making it easy for the user to get into one situation but hard to get out of it. The researchers observed one type of the Obstruction dark pattern: “Hard to Cancel”. The Hard to Cancel dark pattern is restrictive (it limits the choices users can exercise to cancel their services). In cases where websites do not disclose their cancellation policies upfront, Hard to Cancel also becomes information hiding (it fails to inform users about how cancellation is harder than signing up). Forced Action Forcing the user to do something tangential in order to complete their task. The researchers observed one type of the Forced Action dark pattern: “Forced Enrollment” on 6 websites. Limitations of the research The researchers have acknowledged that their study has certain limitations. Only text-based dark patterns are taken into account for this study. There is still work needed to be done for inherently visual patterns (e.g., a change of font size or color to emphasize one part of the text more than another from an otherwise seemingly harmless pattern). The web crawling lead to a fraction of Selenium crashes, which did not allow researchers to either retrieve product pages or complete data collection on certain websites. The crawler failed to completely simulate the product purchase flow on some websites. 
They only crawled product pages and checkout pages, missing out on dark patterns present on other common pages such as website homepages, product search pages, and account creation pages.

The list of dark patterns can be downloaded as a CSV file. For more details, we recommend you read the research paper.

U.S. senators introduce a bipartisan bill that bans social media platforms from using ‘dark patterns’ to trick its users.
How social media enabled and amplified the Christchurch terrorist attack
Can an Open Web Index break Google’s stranglehold over the search engine market?
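As a rough illustration of what detecting text-based dark patterns can look like, a crawler could flag product-page text that matches common urgency and scarcity wording and leave final classification to a human reviewer. This is not the authors' OpenWPM-and-clustering pipeline, and the phrase lists below are invented for the example:

// Toy matcher for urgency/scarcity wording on a product page.
// The actual study clusters recurring UI text across thousands of sites rather than using fixed lists.
const patterns: Record<string, RegExp> = {
  countdownTimer: /\b(offer|sale|deal) ends in \d{1,2}:\d{2}(:\d{2})?\b/i,
  lowStock: /\bonly \d+ (left|remaining) in stock\b/i,
  highDemand: /\b\d+ (people|others) (are viewing|bought) this\b/i,
  confirmshaming: /\bno thanks, i (don'?t|do not) want to save money\b/i,
};

function flagDarkPatternCandidates(pageText: string): string[] {
  return Object.entries(patterns)
    .filter(([, regex]) => regex.test(pageText))
    .map(([name]) => name);
}

// Example: flagDarkPatternCandidates("Hurry! Only 3 left in stock. Sale ends in 01:59:59")
// returns ["countdownTimer", "lowStock"]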

The V programming language is now open source - is it too good to be true?

Bhagyashree R
24 Jun 2019
5 min read
Yesterday, a new statically-typed programming language named V was open sourced. It is described as a simple, fast, and compiled language for creating maintainable software. Its creator, Alex Medvednikov, says that it is very similar to Go and is inspired by Oberon, Rust, and Swift. What to expect from V programming language Fast compilation V can compile up to 1.2 million lines of code per second per CPU. It achieves this by direct machine code generation and strong modularity. If we decide to emit C code, the compilation speed drops to approximately 100k of code per second per CPU. Medvednikov mentions that direct machine code generation is still in its very early stages and right now only supports x64/Mach-O. He plans to make this feature stable by the end of this year. Safety It seems to be an ideal language because it has no null, global variables, undefined values, undefined behavior, variable shadowing, and does bound checking. It supports immutable variables, pure functions, and immutable structs by default. Generics are right now work in progress and are planned for next month. Performance According to the website, V is as fast as C, requires a minimal amount of allocations, and supports built-in serialization without runtime reflection. It compiles to native binaries without any dependencies. Just a 0.4 MB compiler Compared to Go, Rust, GCC, and Clang, the space required and build time of V are very very less. The entire language and standard library is just 400 KB and you can build it in 0.4s. By the end of this year, the author aims to bring this build time down to 0.15s. C/C++ translation V allows you to translate your V code to C or C++. However, this feature is at a very early stage, given that C and C++ are a very complex language. The creator aims to make this feature stable by the end of this year. What do developers think about this language? As much as developers like to have a great language to build applications, many felt that V is too good to be true. Looking at the claims made on the site some developers thought that the creator is either not being truthful about the capabilities of V or is scamming people. https://twitter.com/warnvod/status/1112571835558825986 A language that has the simplicity of Go and the memory management model of Rust is what everyone desires. However, the main reason that makes people skeptical about V is that there is not much proof behind the hard claims it makes. A user on Hacker news commented, “...V's author makes promises and claims which are then retracted, falsified, or untestable. Most notably, the source for V's toolchain has been teased repeatedly as coming soon but has never been released. Without an open toolchain, none of the claims made on V's front page [2] can be verified.” Another thing that makes this case concerning is that the V programming language is currently in alpha stage and is incomplete. Despite that, the creator is making $827 per month from his Patreon account. “However, advertising a product can do something and then releasing it stating it cannot do it yet, is one thing, but accepting money for a product that does not what is advertised, is a fraud,” a user commented. Some developers are also speculating that the creator is maybe just embarrassed to open source his code because of bad coding pattern choices. A user speculates, “V is not Free Software, which is disappointing but not atypical; however, V is not even open source, which precludes a healthy community. 
Additionally, closed languages tend to have bad patterns like code dumps over the wall, poor community communication, untrustworthy binary behaviors, and delayed product/feature releases. Yes, it's certainly embarrassing to have years of history on display for everybody to see, but we all apparently have gotten over it. What's hiding in V's codebase? We don't know. As a best guess, I think that the author may be ashamed of the particular nature of their bootstrap.”

The features listed on the official website are incredible. The only concern was that the creator was not being transparent about how he plans to achieve them. Also, as the project was closed source earlier, there was no way for others to verify the performance guarantees it promises, which is why there was so much confusion.

Alex Medvednikov on why you can trust V programming

On an issue reported on GitHub, the creator commented, “So you either believe me or you don't, we'll see who is right in June. But please don't call me a liar, scammer and spread misinformation.”

Medvednikov was perhaps overwhelmed by the responses and speculation he was seeing on different discussion forums. Developing a whole new language requires a lot of work, and perhaps his deadlines are ambitious. Going by the release announcement Medvednikov made yesterday, he is aware that the language design process hasn’t been the most elegant version of his vision. He wrote, “There are lots of hacks I'm really embarrassed about, like using os.system() instead of native API calls, especially on Windows. There's a lot of ugly C code with #, which I regret adding at all.”

Here’s some great advice shared by a developer on V’s GitHub repository: Take your time, good software takes time. It's easy to get overwhelmed building Free software: sometimes it's better to say "no" or "not for now" in order to build great things in the long run :)

Visit the official website of the V programming language for more detail.

Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near
Pull Panda is now a part of GitHub; code review workflows now get better!
Scala 2.13 is here with overhauled collections, improved compiler performance, and more!


Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event

Bhagyashree R
24 Jun 2019
9 min read
Last month, This Dot Labs, a framework-agnostic JavaScript consultancy, conducted its biannual online live streaming event, This.JavaScript - State of Browsers. In this live stream, representatives of popular browsers talk about the amazing features users can look forward to, next releases, and much more. This time Firefox was missing. However, in attendance were: Stephanie Drescher ,  Program Manager, Microsoft Edge Brian Kardell ,  Developer Advocate, Igalia, an active contributor to WebKit Rijubrata Bhaumik , Software Engineer, Intel, who talked about Intel’s contribution towards web Jonathan Sampson ,  Developer Relations, Brave Paul Kinlan , Sr. Developer Advocate, Google Diego Gonzalez, Product Manager, Samsung Internet The event was moderated by Tracy Lee , who is the  founder of This Dot Labs. Following are some of the updates shared by the browser representatives: What’s new with Edge In December last year, Microsoft announced that it will be adopting Chromium in the development of Microsoft Edge for desktop. And, beginning this year we saw its decision coming to fruition. The tech giant made the first preview builds of the Chromium-based Edge available to both macOS and Windows 10 users. These preview builds are available for testing from the Microsoft Edge Insider site. This Chromium-powered Edge is available for iOS and Android users too. Stephanie Drescher shared what has changed for the Edge team after switching to Chromium. This is enabling them to deliver and update the Edge browser across all supported versions of Windows. This is also allowing them to update the browser more frequently as they are no longer tied to the operating system. The Edge team is not just using Chromium but also contributing all the web platform enhancements back to Chromium by default. The team has already made 400+ commits into the Chromium project. Edge comes with support for cross-platform and installable progressive web apps directly from the browser. The team’s next focus area is to improve Windows experience in terms of accessibility, localization, scrolling, and touch. At Build 2019, Microsoft also announced its new WebView that will be available for Win32 and UWP apps. She said this “will give you the option of an evergreen Chromium platform via edge or the option to bring your own version for AppCompat via a model that's similar to Electron.” Moving on to dev tools, the browser has several new dev tools that are visually aligned with VS Code. The updates in dev tools include dark mode on by default, control inputs, and the team is further exploring “more ways to align the experience between your browser dev tools and VS Code.” The browser’s built-in tools can now inspect and debug any Microsoft-Edge powered web content including PWAs, WebView, etc. No doubt these are some amazing features to be excited for. Edge has come to iOS and macOS, however, the question of whether it will support Linux in the future remains unanswered. Drescher said that the team has no plans right now to support Linux, however looking at the number of user requests for Linux support they are starting to think about it. What’s new with Chrome At I/O 2019, Google shared its vision for Chrome, which is making it "instant, powerful, and safe" to help improve the overall browsing experience. To make Chrome faster and lighter, a bunch of improvements to V8, Chrome’s JavaScript engine has been made. Now, JavaScript memory usage is down by 20% for real-world apps. 
After addressing the startup bottlenecks, Chrome's loading speed has now become 50% better on low-end devices and 10 percent across devices. The scrolling performance has also improved by 18%. Along with these speed gains, the team has also introduced a few features in the web platform that aim to take the burden away from the developers: The lazy loading mechanism reduces the initial payload to improve load time. You just need to add “loading=lazy" in the image or iframe elements. The idea is simple, the web browser will not download an image or iframe that has the loading attribute until the user scrolls near to it. The Portals API, first showcased at I/O this year, aims to make navigation between sites and web pages smoother. Portals is very similar to iframe in that it allows web developers to embed remote content in their pages. The difference is that with Portals you will able to navigate inside the content you are embedding. As a part of making Chrome more powerful, Google is actively working on bridging the capabilities gap between native and web under Project Fugu. It has already introduced two APIs: Web Share and Web Share Target and plans to bring more capabilities like writable file API, event alarms, user idle detection, and more. As the name suggests, the Web Share API allows websites to invoke the native sharing capabilities of the host platform. Users will be able to easily share either a URL or text on pretty much any platform they want to. Till date, we were restricted to share content on native apps that have registered as a share target. With Web Share Target API, installed web apps can also register with the underlying OS as a target to receive shared content. Talking about the safety aspect, Chrome now comes with support for WebAuthn, a new authentication standard by W3C, starting from its 67 version. This API allows servers to integrate strong authenticators that are built into devices, for instance, Windows Hello or Apple’s Touch ID. What's new with Brave Edge, Chrome, and Brave share one common thing and that is they all are Chromium-based. But, what sets Brave apart is the Basic Attention Token (BAT). Jonathan Sampson, who was representing Brave, said that we have seen a “Cambrian Explosion” of cryptocurrencies utility tokens or blockchain assets like Bitcoin, Litecoin, Etherium. Partnership with Coinbase Previously, if we wanted to acquire these assets there was only one way to do it “mining”, which meant a huge investment on expensive GPUs and power bill. Brave believes that the next step to earn these assets is primarily by your “attention”. Brave’s goal is to take users from mining to earning blockchain assets. As a part of this goal, it has partnered with Coinbase, one of the prominent companies in the blockchain space. Users will get 10 dollars in the form of BAT just for learning the state of digital advertising and what Brave and attention tokens are doing in that space. Through BAT, Brave is providing its consumers with a direct way to support their content creators. These content creators can customize and personalize this entire experience by navigating to the signing up on Brave’s creators page. Implementation changes in how BAT is sent to creators The Brave team has also made some implementation changes in terms of how this whole thing works. Previously, consumers could send these tokens to anyone. 
The token then used to go into an omnibus settlement wallet and stays there until that creator verifies with the program and demonstrates ownership over their web property. Finally, after all this, they get access to these tokens for use. Unfortunately, this could mean that some tokens have to “sit in a state of limbo” for an indefinite amount of time. Now, the team has re-engineered this process to hold these tokens inside your wallet for up to 90 days. If and when that property is verified the tokens are transmitted out. And, if the property is never verified then the tokens are released back inside your wallet. You can send them to another creator instead of letting them sit in that omnibus settlement wallet. Sampson further added, “of course the entire process goes through the anonymize protocol so that brave nor anybody else has any idea which websites you're visiting or to whom you are contributing support.” Inner working of Brave ads To better the ads recommendation Brave comes with a machine learning model integrated. This feature is opt-in so the user gets to decide when and how many ads they want to see in order to earn BAT from their attention. The ML model can study the user and learn about them each day. Every day a catalog is downloaded to each users’ device. Then the individual machines would churn away on that catalog to figure out which ads are relevant to an individual. Once, the relevant ads are found out users will see a small operating system notification. Brave sends 70% of the revenue made from the users’ attention to the user in the form of BAT. Brave Sync (Beta) The beta version of Brave Sync is available across platforms from Windows, macOS, Linux to Android, and iOS. Similar to Brave Ads, this is also an opt-in feature that allows you to automatically sync browsing data across devices. Right now it is in beta and supports syncing only bookmarks. In the future releases, we can expect support for tabs, history, passwords, autofill, as well as Brave Rewards. Once you enable it on one device, you just need to scan a QR code or enter a secret phrase to register another device for syncing. Canary builds available Like all the other browsers, Brave has also started to share their nightly and dev builds to give developers an “earlier insight” into the work they are doing. You can access them through their download page. These were some of the major updates discussed in the live stream. There was also Intel and Samsung who talked about their contributions to the web. Igalia’s developer Brian Kardell talked about the dark mode, pointer events, and more in WebKit. Watch the full event on YouTube for more details. https://www.youtube.com/watch?v=olSQai4EUD8 Elvis Pranskevichus on limitations in SQL and how EdgeQL can help Microsoft makes the first preview builds of Chromium-based Edge available for testing Brave introduces Brave Ads that share 70% revenue with users for viewing ads
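As a small aside on the Chrome section above: the Web Share API mentioned there is already straightforward to try from a page served over HTTPS. A minimal, feature-detected sketch follows; the title, text, and URL values are placeholders:

// Invokes the platform's native share sheet where the browser supports it.
async function shareArticle(): Promise<void> {
  if (!('share' in navigator)) {
    console.log('Web Share API not supported in this browser; fall back to a copy/paste UI.');
    return;
  }
  try {
    await navigator.share({
      title: 'State of Browsers',                  // placeholder values
      text: 'Updates from Edge, Chrome and Brave',
      url: 'https://example.com/state-of-browsers',
    });
    console.log('Content shared successfully');
  } catch (err) {
    console.log('Share dismissed or failed', err);
  }
}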


Raspberry Pi 4 is up for sale at $35, with 64-bit ARM core, up to 4GB memory, full-throughput gigabit Ethernet and more!

Vincy Davis
24 Jun 2019
5 min read
Today, the Raspberry Pi 4 model is up for sale, starting at $35. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, three memory options of up to 4GB, full-throughput gigabit Ethernet, Dual-band 802.11ac wireless networking, two USB 3.0 and two USB 2.0 ports, a complete compatibility with earlier Raspberry Pi products and more. Eben Upton, Chief Executive at Raspberry Pi Trading has said that “This is a comprehensive upgrade, touching almost every element of the platform.” This is the first Raspberry Pi product available offline, since the opening of their store in Cambridge, UK.   https://youtu.be/sajBySPeYH0   What’s new in Raspberry Pi 4? New Raspberry Pi silicon Previous Raspberry Pi models are based on 40nm silicon. However, the new Raspberry Pi 4 is a complete re-implementation of BCM283X on 28nm. The power saving delivered by the smaller process geometry has enabled the use of Cortex-A72 core, which has a 1.5GHz quad-core 64-bit ARM. The Cortex-A72 core can execute more instructions per clock, yielding four times performance improvement, over Raspberry Pi 3B+, depending on the benchmark. New Raspbian software The new Raspbian software provides numerous technical improvements, along with an extensively modernized user interface, and updated applications including the Chromium 74 web browser. For Raspberry Pi 4, the Raspberry team has retired the legacy graphics driver stack used on previous models and opted for the Mesa “V3D” driver. It offers benefits like OpenGL-accelerated web browsing and desktop composition, and also eliminates roughly half of the lines of closed-source code in the platform. Raspberry Pi 4 memory options For the first time, Raspberry Pi 4 is offering a choice of memory capacities, as shown below: All three variants of the new Raspberry Pi model have been launched. The entry-level Raspberry Pi 4 Model B is priced at 35$, excluding sales tax, import duty, and shipping. Additional improvements in Raspberry Pi 4 Power Raspberry Pi 4 has USB-C as the power connector, which will support an extra 500mA of current, ensuring 1.2A for downstream USB devices, even under heavy CPU load. Video The previous type-A HDMI connector has been replaced with a pair of type-D HDMI connectors, so as to accommodate dual display output within the existing board footprint. Ethernet and USB The Gigabit Ethernet magjack has been moved to the top right of the board, hence simplifying the PCB routing. The 4-pin Power-over-Ethernet (PoE) connector is in the same location, thus Raspberry Pi 4 remains compatible with the PoE HAT. The Ethernet controller on the main SoC is connected to an external Broadcom PHY, thus providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane, and providing a total of 4Gbps of bandwidth, shared between the four ports. The Raspberry Pi 4 model has the LPDDR4 memory technology, with triple bandwidth. It has also upgraded the video decode, 3D graphics, and display output to support 4Kp60 throughput. Onboard Gigabit Ethernet and PCI Express controllers have been added to address the non-multimedia I/O limitations of the previous devices. Image Source: Raspberry Pi blog New Raspberry Pi 4 accessories Due to the connector and form-factor changes, Raspberry Pi 4 has the requirement of new accessories. The Raspberry Pi 4 has its own case, priced at $5. It also has developed a suitable 5V/3A power supply, which is priced at $8 and is available in the UK, European, North American and Australian plug formats. 
The Raspberry Pi 4 Desktop Kit is also available and priced at $120. While the earlier Raspberry Pi models will be available in the market, Upton has mentioned that Raspberry Pi will continue to build these models as long as there's a demand for them. Users are quite ecstatic with the availability of Raspberry Pi 4 and many have already placed orders for it. https://twitter.com/Morphy99/status/1143103131821252609 https://twitter.com/M0VGA/status/1143064771446677509 A user on Reddit comments, “Very nice. Gigabit LAN and 4GB memory is opening it up to a hell of a lot more use cases. I've been tempted by some of the Pi's higher-specced competitors like the Pine64, but didn't want to lose out on the huge community behind the Pi. This seems like the best of both worlds to me.” A user on Hacker News says that “Oh my! This is such a crazy upgrade. I've been using the RPI2 as my HTPC/NAS at my folks, and I'm so happy with it. I was itching to get the last one for myself. USB 3.0! Gigabit Ethernet! WiFi 802.11ac, BT 5.0, 4GB RAM! 4K! $55 at most?! What the!? How the??! I know I'm not maintaining decorum at Hacker News, but I am SO mighty, MIGHTY excited! I'm setting up a VPN to hook this (when I get it) to my VPS and then do a LOT of fun stuff back and forth, remotely, and with the other RPI at my folks.” Another comment reads “This is absolutely great. The RPi was already exceptional for its price point, and this version seems to address the few problems it had (lack of Gigabit, USB speed and RAM capacity) and add onto it even more features. It almost seems too good to be true. Can't wait!” Another user says that “I'm most excited about the modern A72 cores, upgraded hardware decode, and up to 4 GB RAM. They really listened and delivered what most people wanted in a next gen RPi.” For more details, head over to the Raspberry Pi official blog. You can now install Windows 10 on a Raspberry Pi 3 Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial] Introducing Strato Pi: An industrial Raspberry Pi

Deepfakes House Committee Hearing: Risks, Vulnerabilities and Recommendations

Vincy Davis
21 Jun 2019
16 min read
Last week, the House Intelligence Committee held a hearing to examine the public risks posed by “deepfake” videos. Deepfake is identified as a technology that alters audio or video and then is passed off as true or original content. In this hearing, experts on AI and digital policy highlighted to the committee, deepfakes risk to national security, upcoming elections, public trust and the mission of journalism. They also offered potential recommendations on what Congress could do to combat deepfakes and misinformation. The chair of the committee Adam B. Schiff, initiated the hearing by stating that it is time to regulate the technology of deepfake videos as it is enabling sinister forms of deception and disinformation by malicious actors. He adds that “Advances in AI or machine learning have led to the emergence of advance digitally doctored type of media, the so-called deepfakes that enable malicious actors to foment chaos, division or crisis and have the capacity to disrupt entire campaigns including that for the Presidency.” For a quick glance, here’s a TL;DR: Jack Clerk believes that governments should be in the business of measuring and assessing deepfake threats by looking directly at the scientific literature and developing a base knowledge of it. David Doermann suggests that tools and processes which can identify fake content should be made available in the hands of individuals, rather than relying completely on the government or on social media platforms to police content. Danielle Citron warns that the phenomenon of deepfake is going to be increasingly felt by women and minorities and for people from marginalized communities. Clint Watts provides a list of recommendations which should be implemented to prohibit U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content. A unified standard should be followed by all social media platforms. Also they should be pressurized to have a 10-15 seconds delay in all videos, so that they can decide, to label a particular video or not. Regarding 2020 Presidential election: State governments and social media companies should be ready with a response plan, if a fake video surfaces to cause disrupt. It was also recommended that the algorithms to make deepfakes should be open sourced. Laws should be altered, and strict actions should be awarded, to discourage deepfake videos. Being forewarned is forearmed in case of deepfake technology Jack Clerk, OpenAI Policy Director, highlighted in his testimony that he does not think A.I. is the cause of any disruption, but actually is an “accelerant to an issue which has been with us for some time.'' He adds that computer software aligned with A.I. technology has become significantly cheaper and more powerful, due to its increased accessibility. This has led to its usage in audio or video editing, which was previously very difficult. Similar technologies  are being used for production of synthetic media. Also deepfakes are being used in valuable scientific research. Clerk suggests that interventions should be made to avoid its misuse. He believes that “it may be possible for large-scale technology platforms to try and develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level. 
We can also increase funding.” He strongly believes that governments should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base knowledge. Clerk concludes saying that “being forewarned is forearmed here.” Make Deepfake detector tools readily availaible David Doermann, the former Project Manager at the Defense Advanced Research Projects Agency mentions that the phrase ‘seeing is believing’ is no longer true. He states that there is nothing fundamentally wrong or evil about the technology, like basic image and video desktop editors, deepfakes is only a tool. There are a lot of positive applications of generative networks just as there are negative ones. He adds that, as of today, there are some solutions that can identify deepfakes reliably. However, Doermann fears that it’s only a matter of time before the current detection capabilities will be rendered less effective. He adds that “it's likely to get much worse before it gets much better.” Doermann suggests that tools and processes which can identify such fake content should be made available in the hands of individuals, rather than relying completely on the government or on social media platforms to police content. At the same time, there should also be ways to verify it or prove it or easily report it. He also hopes that automated detection tools will be developed, in the future, which will help in filtering and detection at the front end of the distribution pipeline. He also adds that “appropriate warning labels should be provided, which suggests that this is not real or not authentic, or not what it's purported to be. This would be independent of whether this is done and the decisions are made, by humans, machines or a combination.” Groups most vulnerable to Deepfake attacks Women and minorities Danielle Citron, a Law Professor at the University of Maryland, describes Deepfake as “particularly troubling when they're provocative and destructive.” She adds that, we as humans, tend to believe what our eyes and ears are telling us and also tend to share information that confirms our biases. It’s particularly true when that information is novel and negative, so the more salacious, we're more willing to pass it on. She also specifies that the deepfakes on social media networks are ad-driven. When all of this is put together, it turns out that the more provocative the deepfake is, the salacious will be the spread virally.  She also informed the panel committee about an incident, involving an investigative journalist in India, who had her posters circulated over the internet and deepfake sex videos, with her face morphed into pornography, over a provocative article. Citron thus states that “the economic and the social and psychological harm is profound”. Also based on her work in cyber stalking, she believes that this phenomenon is going to be increasingly felt by women and minorities and for people from marginalized communities. She also shared other examples explaining the effect of deepfake on trades and businesses. Citron also highlighted that “We need a combination of law, markets and really societal resilience to get through this, but the law has a modest role to play.” She also mentioned that though there are laws to sue for defamation, intentional infliction of emotional distress, privacy torture, these procedures are quite expensive. She adds that criminal law offers very less opportunity for the public to push criminals to the next level. 
National security Clint Watts, a Senior Fellow at the Foreign Policy Research Institute provided insight into how such technologies can affect national security. He says that “A.I. provides purveyors of disinformation to identify psychological vulnerabilities and to create modified content digital forgeries advancing false narratives against Americans and American interests.” Watts suspects that Russia, “being an enduring purveyor of disinformation is and will continue to pursue the acquisition of synthetic media capability, and employ the output against adversaries around the world.” He also adds that China, being the U.S. rival, will join Russia “to get vast amounts of information stolen from the U.S. The country has already shown a propensity to employ synthetic media in broadcast journalism. They'll likely use it as part of disinformation campaigns to discredit foreign detractors, incite fear inside western-style democracy and then, distort the reality of audiences and the audiences of America's allies.” He also mentions that deepfake proliferation can present a danger to American constituency by demoralizing it. Watts suspects that the U.S. diplomats and military personnel deployed overseas, will be prime target for deepfake driven disinformation planted by adversaries. Watts provided a list of recommendations which should be implemented to “prohibit U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content.” The U.S. government must be the sole purveyor of facts and truth to constituents, assuring the effective administration of democracy via productive policy debate from a shared basis of reality. Policy makers should work jointly with social media companies to develop standards for content and accountability. The U.S. government should partner with private sectors to implement digital verification designating a date, time and physical origination of the content. Social media companies should start labeling videos, and forward the same across all platforms. Consumers should be able to determine the source of the information and whether it's the authentic depiction of people and events. The U.S. government from a national security perspective, should maintain intelligence on capabilities of adversaries to conduct such information. The departments of defense and state should immediately develop response plans, for deepfake smear campaigns and mobilizations overseas, in an attempt to mitigate harm. Lastly he also added that public awareness of deepfakes and signatures, will assist in tamping down attempts to subvert the  U.S. democracy and incite violence. Schiff asked the witnesses, if it's “time to do away with the immunity that social media platforms enjoy”, Watts replied in the affirmative and listed suggestions in three particular areas. If social media platforms see something spiking in terms of virality, it should be put in a queue for human review, linked to fact checkers, then down rate it and don't let it into news feeds. Also make the mainstream understand what is manipulated content. Anything related to outbreaks of violence and public safety should be regulated immediately. Anything related to elected officials or public institutions, should immediately be flagged and pulled down and checked and then a context should be given to it. 
Devin Nunes, the Ranking Member of the committee, asked Citron what kind of filters can be placed on these tech companies so that filtering is “not developed by a partisan left wing like it is now, where most of the time, it's conservatives who get banned and not democrats”. Citron suggested that proactive filtering won’t be possible, and hence companies should react responsibly and be bipartisan. She added, “but rather, is this a misrepresentation in a defamatory way, right, that we would say it's a falsehood that is harmful to reputation, that's an impersonation, then we should take it down. This is the default I am imagining.”

How laws could be altered according to the changing times, to discourage deepfake videos

Citron says that laws could be altered, as in the case of Section 230(c). It currently states that “No speaker or publisher -- or no online service shall be treated as a speaker or publisher of someone else's content.” This could be altered to “No online service that engages in reasonable content moderation practices shall be treated as a speaker or publisher of somebody else's content.” Citron believes that leaving out a reasonableness requirement allows negligent moderation to go unchecked by the law. She also adds, “I've been advising Twitter and Facebook all of the time. There are meaningful, reasonable practices that are emerging and have emerged in the last ten years. We already have a guide; it's not as if this is a new issue in 2019. So we can come up with reasonable practices.”

Watts added that if an adversary from a country like China, Iran, or Russia makes a deepfake video to undermine the U.S., it can be traced back if aggressive laws are in place. The response could be anything from “arrest and extradition, if the sanction permits” to an individual response or a cyber response, and that could help discourage deepfakes.

How to slow down the spread of videos

One of the reasons that these types of manipulated images gain traction is that their spread is almost instantaneous: they can be shared around the world, across platforms, in a few seconds. Doermann says that social media platforms must be pressured to introduce a 10-15 second delay, so that it can be decided whether to label a particular video or not. He adds, “We've done it for child pornography, we've done it for human trafficking, they're serious about those things. This is another area that's a little bit more in the middle, but I think they can take the same effort in these areas to do that type of triage.” This delay would allow third parties or fact checkers to decide on the authenticity of videos and label them. Citron adds that this is where labelling a particular video can help: “I think it is incredibly important and there are times in which that's the perfect rather than second best, and we should err on the side of inclusion and label it as synthetic.”

The representative of Ohio, Brad Wenstrup, added that there could be international extradition arrangements which can punish somebody when “something comes from some other country, maybe even a friendly country, that defames and hurts someone here”. There should be an agreement among nations that “we'll extradite those people and they can be punished in your country for what they did to one of your citizens.” Terri Sewell, a Representative from Alabama, further probed the current state of detecting fake videos, to which Doermann replied that there are currently enough solutions to detect a fake video, though with a delay of 15-20 minutes.
Deepfakes and the 2020 Presidential elections

Watts says that he’s concerned about deepfakes being deployed on the eve of election day 2020. Foreign adversaries may use a standard disinformation approach by “using an organic content that suits their narrative and inject it back.” This can escalate as more people make deepfakes each year. He also added, “Right now I would be very worried about someone making a fake video about electoral systems being out or broken down on election day 2020.” State governments and social media companies should therefore be ready with a response plan in the wake of such an event.

Sewell then asked the witnesses for suggestions for campaigns and political parties/candidates so that they are prepared for the possibility of deepfake content. Watts replied that the most important thing to counter fake content would be a unified standard that all social media companies follow. He added, “if you're a manipulator, domestic or international, and you're making deep fakes, you're going to go to whatever platform allows you to post anything from inauthentic accounts. They go to wherever the weak point is and it spreads throughout the system.” He believes that such a system would help counter extremism, disinformation, and political smear campaigns. Watts added that any sort of lag in responding to such videos should be avoided, as “any sort of lag in terms of response allows that conspiracy to grow.” Citron also pointed out that, first of all, candidates should have a clear policy about deepfakes and should commit that they won’t use them or spread them.

Should the algorithms to make deepfakes be open sourced?

Doermann answered that the algorithms behind deepfakes absolutely have to be open sourced. He says that though this might help adversaries, they are going to learn about the technology anyway. He believes this is significant because “We need to get this type of stuff out there. We need to get it into the hands of users. There are companies out there that are starting to make these types of things.” He also states that people should be able to use this technology: the more we educate them and the more tools they learn, the better the choices people can make.

On Mark Zuckerberg’s deepfake video

On being asked to comment on Mark Zuckerberg’s decision not to take down the deepfake video of himself from his own platform, Facebook, Citron replied that by not taking down the video, Mark gave a perfect example of how to handle “satire and parody”. She added that private companies can make these kinds of choices, as they have an incredible amount of power without any liability: “it seemed to be a conversation about the choices they make and what does that mean for society. So it was incredibly productive, I think.” Watts also opined that he likes Facebook for its consistency in terms of enforcement, and that it is always trying to learn and implement better practices. He added that Facebook is always ready to hear “from legislatures about what falls inside those parameters. The one thing that I really like is that they're identifying inauthentic account creation and inauthentic content generation, they are enforcing it, they have increased the scale, and it is very, very good in terms of how they have scaled it up; it’s not perfect, but it is better.”

Read More: Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed?
On the Nancy Pelosi doctored video

Schiff asked the witnesses whether there is any account of how many millions of people watched the doctored video of Nancy Pelosi, and of how many of them ultimately got to know that it was not a real video. He said he was asking this because, according to psychologists, people rarely let go of a negative impression once it has formed. Clarke replied that “Fact checks and clarifications tend not to travel nearly as far as the initial news.” He added that it becomes a very general problem: “If you care, you care about clarifications and fact checks, but if you're just enjoying media, you're enjoying media. You enjoy the experience of the media and the absolute minority doesn’t care whether it's true.”

Schiff also recalled how in 2016 “some foreign actors, particularly Russia, had mimicked Black Lives Matter to push out content to racially divide people.” Such videos gave the impression of police violence against people of colour. They “certainly push out videos that are enormously jarring and disruptive.”

All the information revealed in the hearing was described as “scary and worrying” by one of the representatives. Schiff, the chair of the committee, ended the hearing after thanking all the witnesses for their testimonies and recommendations.

For more details, head over to the full Hearing on deepfake videos by the House Intelligence Committee.

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts
Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes
Machine generated videos like Deepfakes – Trick or Treat?

Shopify announces Fulfillment network, video and 3D model assets, custom storefront tools and more!

Vincy Davis
20 Jun 2019
6 min read
At the ongoing Shopify Unite 2019 conference, Shopify has announced a number of new products, including a fulfillment network, video and 3D model assets, custom storefront tools, a new online store design experience, and more. Most of these products will be launched later this year. Here are the major highlights:

Shopify Fulfillment Network

The Shopify Fulfillment Network will be a dispersed network of fulfillment centers which uses machine learning to automatically select the optimal inventory quantities per location, as well as the closest fulfillment option for each customer’s shipment. Once available, it will be possible to simply install the app, select the products, get a quote, and begin selling. Entrepreneurs will be able to provide fast, low-cost delivery to their customers while maintaining ownership of customer data and a branded shipping experience.

A single back office: A merchant’s order, inventory, and customer data will stay synced and up to date across all warehouse locations and channels.
Recommended warehouse locations: To save costs on shipping, it will be possible to find the best locations based on where the sales are coming from.
Low stock alerts: When inventory runs low, merchants will know when to replenish in order to continue meeting demand.
99.5% order accuracy: The correct package will be chosen and out the door on time.
Hands-on warehouse help: A dedicated account manager will assist the merchant in finding the best path to reach their customers, so that costs can be kept low.

The Shopify Fulfillment Network is currently available in the United States, and interested merchants can apply for early access.
https://twitter.com/treklightgear/status/1141406217815724034

Native support for video and 3D model assets

Shopify products will natively support video and 3D model assets, adding a new dimension to products and providing a richer purchase experience for customers. This feature is expected to be released later this year.

Manage media through a single location: It will be possible to upload, access, and store video and 3D models from the same place where images are managed today.
Deploy through the new Shopify video player: Users can use one of the 10 starter themes to easily display video and 3D models, using the new Shopify player for video or the viewer for Shopify AR.
New editor apps: Shopify is inviting partners and developers to create additional apps and custom integrations, opening up new ways to create and modify images, videos, and AR experiences.
https://twitter.com/Scobleizer/status/1141456478496161797
https://twitter.com/thewakdesigns/status/1141607172670926848

New Online Store Design Experience

The new online store design will provide entrepreneurs with more options for customization, giving them more control over the layout and aesthetic of their store. This feature is expected to be released later this year.

Easier customization at the page and store level: Any page can now be customized using sections, just like the homepage. Users can also save time by setting content on multiple pages using master pages.
Portable content that moves with you: Users will no longer have to duplicate their theme or move content over manually. The shop’s content will follow the owner, so they can make changes like downloading a new version of a theme or trying out a new one.
A new workspace to update your store: It will be possible to edit and preview updates before publishing.
Any minor tweak or major change can be drafted in this new space before going live.

Custom storefront tools

Shopify’s Storefront API allows customers to use the custom storefront tools to free their storefront from certain back-end dependencies. This gives customers the flexibility to sell anywhere and in any way they want. The feature is most exciting for complex or niche businesses that use the web, mobile, gaming, and other interactive mediums as storefronts to reach their customers. Shopify merchants have already started to use the Storefront API; for example, NTWRK hosted a live stream shopping show using it. (A minimal sketch of what a Storefront API call can look like is included at the end of this article.)

Connect microservices to create personalized experiences: Third-party shipping services can be used for blogs, storefronts, and product pages, or for accurate shipment alerts.
Turn the world into your storefront: Entrepreneurs can engage with their customers through vending machines, live streams, smart mirrors, voice shopping, and more.
Speedy and scalable, so development teams can work in parallel: The flexible architecture will enable development teams to create the experiences and storefronts they envision.

Interested merchants can create custom experiences on the custom storefront tools website.
https://twitter.com/CarloTeran/status/1141475420904157185

Customer loyalty with retail shoppers

With the new Shopify Point of Sale cart app extensions, users can apply and edit loyalty and promotional details directly from the customer cart.

Apply discounts lightning-quick: The number of clicks needed to apply a discount has been reduced to one, saving users an average of 10 seconds per sale.
Important information at your fingertips: Key customer details, from birthdays to reward milestones, will surface automatically and in context, so merchants and their staff won’t have to navigate to separate apps to get alerts.
More flexibility: Shopify’s app partners will give merchants the flexibility to pick a program that works best for their customer experience, like rewarding a long-time customer, online or in-store.

Interested merchants can learn more about the apps on the POS Loyalty and Promotion Apps website.
https://twitter.com/ShawnBouchard/status/1141483355252199425

Shopify Payments in multiple currencies and languages

From this year onwards, merchants can run their business in their own preferred language. Shopify is already available in French, German, Japanese, Italian, Brazilian Portuguese, and Spanish, and will become available in 11 additional languages, including Dutch and Simplified Chinese. In addition, Shopify Payments will enable selling in multiple currencies and will be globally available to all Shopify merchants later this year. Displayed prices will use simple rounding rules and automatically adjust based on current foreign-exchange rates. Shoppers will soon be able to convert between nine major currencies - GBP, AUD, CAD, EUR, HKD, JPY, NZD, SGD, and USD - and pay in their preferred way.
https://twitter.com/anthonycook/status/1141392464818900996

Users of Shopify are quite elated with all the announcements, and some touted this as Shopify’s way of combating Amazon, its main competitor in the market.
https://twitter.com/tomfgoodwin/status/1141434508257964032

Visit the Shopify blog for more details.
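For developers curious what using the custom storefront tools might look like in practice, here is a minimal, illustrative TypeScript sketch of a call to the Storefront API’s GraphQL endpoint. It is not taken from Shopify’s announcement: the shop domain, access token, and versioned URL path below are placeholder assumptions, and the query simply fetches a handful of product titles.

// Illustrative sketch only: the shop domain, API version in the URL, and the
// access token are placeholder assumptions, not values from the announcement.
// Assumes a runtime with a global fetch (modern browsers or Node 18+).
const SHOP_DOMAIN = "example-store.myshopify.com";
const STOREFRONT_ACCESS_TOKEN = "<your-storefront-access-token>";

// A small GraphQL query asking for the first three products.
const query = `
  {
    products(first: 3) {
      edges {
        node {
          title
          handle
        }
      }
    }
  }
`;

async function listProducts(): Promise<void> {
  // The Storefront API is exposed as a GraphQL endpoint; the exact versioned
  // path may differ for your shop.
  const response = await fetch(`https://${SHOP_DOMAIN}/api/2019-07/graphql.json`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Shopify-Storefront-Access-Token": STOREFRONT_ACCESS_TOKEN,
    },
    body: JSON.stringify({ query }),
  });

  const result = await response.json();
  // Walk the GraphQL edges/node structure and print each product title.
  for (const edge of result.data.products.edges) {
    console.log(`${edge.node.title} (${edge.node.handle})`);
  }
}

listProducts().catch(console.error);

The same pattern could back a live stream overlay, a smart mirror, or a voice-shopping app by swapping in whatever query the experience needs, which is the flexibility the custom storefront tools are aiming for.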
Read More:
Why Retailers need to prioritize eCommerce Automation in 2019
5 things to consider when developing an eCommerce website
Through the customer’s eyes: 4 ways Artificial Intelligence is transforming ecommerce