
How-To Tutorials

7008 Articles

‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee

Vincy Davis
27 May 2019
6 min read
Last week, the US House Oversight and Reform Committee held its first hearing on the use of facial recognition technology. The hearing was an incisive discussion of the use of facial recognition by government and commercial entities, flaws in the technology, the lack of regulation, and its impact on citizens' civil rights and liberties. The chairman of the committee, U.S. Representative Elijah Cummings of Maryland, said that one of the goals of the hearing is to "protect against its abuse".

At the hearing, Joy Buolamwini, founder of the Algorithmic Justice League, highlighted misidentification as one of the technology's most pressing failures, one that can lead to false arrests and accusations, a risk especially for marginalized communities. In one of her studies of facial recognition systems at MIT, she found that for the task of guessing the gender of a face, systems from IBM, Microsoft, and Amazon had error rates that rose to over 30 percent for women with darker skin. On evaluating benchmark datasets from organizations like NIST (the National Institute of Standards and Technology), a striking imbalance was found: the data was 75 percent male and 80 percent lighter-skinned, which she described as "pale male datasets". She added that our faces may well be the final frontier of privacy and that Congress must act now to uphold American freedoms and rights.

Andrew G. Ferguson, Professor of Law at the University of the District of Columbia, agreed with Buolamwini, stating that "Congress must act now" to prohibit facial recognition until Congress establishes clear rules. "The fourth amendment won't save us. The Supreme Court is trying to make amendments but it's not fast enough. Only legislation can react in real time to real time threats."

Another strong concern raised at the hearing was the use of facial recognition technology in law enforcement. Neema Singh Guliani, Senior Legislative Counsel of the American Civil Liberties Union, said law enforcement across the country, including the FBI, is using face recognition in an unrestrained manner. This growing use of facial recognition is happening "without legislative approval, absent safeguards, and in most cases, in secret." She added that the U.S. reportedly has over 50 million surveillance cameras; combined with face recognition, this threatens to create a near-constant surveillance state.

An important proposed addition to regulating facial recognition technology was to include all kinds of biometric surveillance under the ambit of surveillance technology, including voice recognition and gait recognition, which are also being used actively by private companies like Tesla. Regulation should include not only legislation but also real enforcement, so that "when your data is misused you have actually an opportunity, to go to court and get some accountability", Guliani added. She also urged the committee to investigate how the FBI and other federal agencies are using this technology, whose accuracy has not been tested, and how the agency is complying with the Constitution while "reportedly piloting Amazon's face recognition product".

Like the FBI and other government agencies, companies like Amazon and Facebook were heavily criticized by members of the committee for misusing the technology. Committee members noted that these companies continue to look for ways to develop and market facial recognition.

On the same day as the hearing came the news that Amazon shareholders rejected a proposal to ban the sale of its facial recognition tech to governments. In January this year, activist shareholders had proposed a resolution to limit the sale of Amazon's facial recognition tech, called Rekognition, to law enforcement and government agencies. The technology has been found to be biased and inaccurate, and is regarded as an enabler of racial discrimination against minorities.

Jim Jordan, U.S. Representative of Ohio, raised a concern about how "Some unelected person at the FBI talks to some unelected person at the state level, and they say go ahead," without giving "any notification to individuals or elected representatives that their images will be used by the FBI." Using face recognition in such a casual manner poses "a unique threat to our civil rights and liberties", noted Clare Garvie, Senior Associate at the Georgetown University Law Center's Center on Privacy & Technology. Studies continue to show that the accuracy of face recognition varies with the race of the person being searched. The technology "makes mistakes and risks making more mistakes and more misidentifications of African Americans". She asserted that "face recognition is too powerful, too pervasive, too susceptible to abuse, to continue being unchecked."

Members generally agreed that federal legislation is necessary to prevent a confusing and potentially contradictory patchwork of regulation of government use of facial recognition technology. Another point of discussion was how well facial recognition could work if implemented responsibly in the real world; it could help the surveillance and healthcare sectors in a huge way if its challenges are addressed correctly. Dr. Cedric Alexander, former President of the National Organization of Black Law Enforcement Executives, was more cautious about banning the technology, and was of the opinion that it can be used by police in an effective way if they are trained properly.

Last week, San Francisco became the first U.S. city to pass an ordinance barring police and other government agencies from using facial recognition technology. The decision has attracted attention across the country and could be followed by other local governments. Committee member Gomez made the committee's stand clear: they "are not anti-technology or anti-innovation, but we have to be very aware that we're not stumbling into the future blind."

Cummings concluded the hearing by thanking the witnesses, stating, "I've been here for now 23 years, it's one of the best hearings I've seen really. You all were very thorough and very very detailed, without objection." The second hearing is scheduled for June 4 and will have law enforcement witnesses. For more details, head over to the full Hearing on Facial Recognition Technology by the House Oversight and Reform Committee.

Read more:
Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance
Oakland Privacy Advisory Commission lay out privacy principles for Oaklanders and propose ban on facial recognition
Is China’s facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?


Implementing hashing algorithms in Golang [Tutorial]

Sugandha Lahoti
27 May 2019
4 min read
A hashing algorithm is a cryptographic hash technique: a mathematical function that maps data of arbitrary size to a hash of a fixed size. It is designed to be a one-way function that cannot feasibly be reversed. This article covers hash functions and the implementation of a hashing algorithm in Golang.

This article is taken from the book Learn Data Structures and Algorithms with Golang by Bhagvan Kommadi. Complete with hands-on tutorials, this book will guide you in using the best data structures and algorithms for problem-solving in Golang. Install Go version 1.10 for your OS. The GitHub URL for the code in this article is available here.

The hash functions

Hash functions are used in cryptography and other areas. These data structures are presented with code examples related to cryptography. There are two ways to implement a hash function in Go: with crc32 or sha256. Marshaling (saving the hash's internal state in an encoded form) allows that state to be reused later. A BinaryMarshaler (converting the state into binary form) example is explained in this section:

```go
// main package has examples shown
// in Hands-On Data Structures and Algorithms with Go book
package main

// importing bytes, crypto/sha256, encoding, fmt, hash and log packages
import (
    "bytes"
    "crypto/sha256"
    "encoding"
    "fmt"
    "hash"
    "log"
)
```

The main method creates a binary marshaled hash of two example strings. The hashes of the two strings are printed. The sum of the first hash is compared with the second hash using the bytes.Equal function. This is shown in the following code:

```go
// main method
func main() {
    const (
        example1 = "this is a example "
        example2 = "second example"
    )

    var firstHash hash.Hash
    firstHash = sha256.New()
    firstHash.Write([]byte(example1))

    var marshaler encoding.BinaryMarshaler
    var ok bool
    marshaler, ok = firstHash.(encoding.BinaryMarshaler)
    if !ok {
        log.Fatal("first Hash is not generated by encoding.BinaryMarshaler")
    }

    var data []byte
    var err error
    data, err = marshaler.MarshalBinary()
    if err != nil {
        log.Fatal("failure to create first Hash:", err)
    }

    var secondHash hash.Hash
    secondHash = sha256.New()

    var unmarshaler encoding.BinaryUnmarshaler
    unmarshaler, ok = secondHash.(encoding.BinaryUnmarshaler)
    if !ok {
        log.Fatal("second Hash is not generated by encoding.BinaryUnmarshaler")
    }
    if err := unmarshaler.UnmarshalBinary(data); err != nil {
        log.Fatal("failure to create hash:", err)
    }

    firstHash.Write([]byte(example2))
    secondHash.Write([]byte(example2))

    fmt.Printf("%x\n", firstHash.Sum(nil))
    fmt.Println(bytes.Equal(firstHash.Sum(nil), secondHash.Sum(nil)))
}
```

Run the following command to execute the hash.go file:

go run hash.go

The output is the hex-encoded SHA-256 digest followed by true, showing that the unmarshaled hash reproduced the state of the first one.

Hash implementation in Go

Hash implementation in Go has crc32 and sha256 implementations. An implementation of a hashing algorithm with multiple values using an XOR transformation is shown in the following code snippet. The CreateHash function takes a byte array, byteStr, as a parameter and returns the SHA-1 checksum of the byte array:

```go
// main package has examples shown
// in Go Data Structures and algorithms book
package main

// importing crypto/sha1, fmt and hash packages
import (
    "crypto/sha1"
    "fmt"
    "hash"
)

// CreateHash method returns the SHA-1 checksum of byteStr
func CreateHash(byteStr []byte) []byte {
    var hashVal hash.Hash
    hashVal = sha1.New()
    hashVal.Write(byteStr)

    var bytes []byte
    bytes = hashVal.Sum(nil)
    return bytes
}
```

In the following sections, we will discuss the different methods of hash algorithms.
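The CreateHashMultiple function in the next section combines two digests with a small xor helper whose listing does not survive in this excerpt. What follows is a minimal sketch of how such a byte-wise XOR helper might look, written to sit in the same main package as the listings above; truncating the result to the shorter input is an assumption (both arguments in this article are equal-length SHA-1 digests, so the choice makes no difference here).

```go
// xor combines two byte slices by XOR-ing them element by element.
// The result is truncated to the shorter input (an assumption; both
// inputs in this article are 20-byte SHA-1 digests of equal length).
func xor(byteStr1 []byte, byteStr2 []byte) []byte {
    n := len(byteStr1)
    if len(byteStr2) < n {
        n = len(byteStr2)
    }
    result := make([]byte, n)
    for i := 0; i < n; i++ {
        result[i] = byteStr1[i] ^ byteStr2[i]
    }
    return result
}
```

Because XOR is commutative, combining the two digests this way yields the same value regardless of argument order, which is a convenient property when deriving a single hash from a pair of inputs.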
The CreateHashMultiple method

The CreateHashMultiple method takes the byteStr1 and byteStr2 byte arrays as parameters and returns the XOR-transformed bytes value, as follows:

```go
// Create hash for Multiple Values method
func CreateHashMultiple(byteStr1 []byte, byteStr2 []byte) []byte {
    return xor(CreateHash(byteStr1), CreateHash(byteStr2))
}
```

The xor method

The xor method takes the byteStr1 and byteStr2 byte arrays as parameters and returns the result of XOR-ing them byte by byte, as in the sketch shown above.

The main method

The main method invokes the CreateHashMultiple method, passing Check and Hash as string parameters, and prints the hash value of the strings, as follows:

```go
// main method
func main() {
    var bytes []byte
    bytes = CreateHashMultiple([]byte("Check"), []byte("Hash"))
    fmt.Printf("%x\n", bytes)
}
```

Run the following command to execute the hash.go file:

go run hash.go

The output is the hex-encoded XOR of the two SHA-1 checksums.

In this article, we discussed hashing algorithms in Golang alongside code examples. To know more about network representation using graphs and sparse matrix representation using a list of lists in Go, read the book Learn Data Structures and Algorithms with Golang.

Read more:
Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work
State of Go February 2019 - Golang developments report for this month released
The Golang team has started working on Go 2 proposals


React Native VS Xamarin: Which is the better cross-platform mobile development framework?

Guest Contributor
25 May 2019
10 min read
One of the most debated topics in the current mobile industry is the battle between two giant app development platforms, Xamarin and React Native. Central to the hype of this battle and the increasing popularity of these two platforms are the communities of app developers built around them. Both of these open-source app development platforms are preferred by the app development community for creating highly efficient applications while saving developers time and effort. Both React Native and Xamarin have their merits and demerits, which makes selecting the better of the two a bit difficult.

When it comes to selecting the appropriate mobile application platform, it boils down to the nature, needs and overall objectives of the business and company. It also comes down to the features and characteristics of that technology, which either make it the best fit for one project or the worst approach for another. With that said, let's compare the two to find out the major differences, explore the key considerations and determine the winner of this unending platform battle.

An overview of Xamarin

An open-source cross-platform framework for mobile application development, Xamarin can be used to build applications for Android, iOS and wearable devices. Offered as a high-tech enterprise app development tool within the Microsoft Visual Studio IDE, Xamarin has become one of the top mobile app development platforms used by various businesses and enterprises. Apart from being a free app development platform, it facilitates the development of mobile applications using a single programming language, namely C#, for both the Android and iOS versions.

Key features: Since the day of its introduction, Xamarin has been using C#. C# is a popular programming language in the Microsoft community, and with great features like metaprogramming, functional programming and portability, C# is widely preferred by many web developers. Xamarin makes it easy for C# developers to shift from web development to cross-platform mobile app development. Features like portable class libraries, code sharing, test clouds and insights, and compatibility with the Visual Studio and Visual Studio for Mac IDEs make Xamarin a great development tool with no additional costs.

Development environment: Xamarin provides app developers with a comprehensive app development toolkit and software package. The package includes highly compatible IDEs (for both Mac and VS), distribution and analytics tools such as HockeyApp, and testing tools such as Xamarin Test Cloud. With Xamarin, developers no longer have to invest their time and money in incorporating third-party tools. It uses the Mono execution environment for both platforms, i.e. Android and iOS.

Framework: C# has matured from its infancy, and the Xamarin framework now provides strong type safety, which prevents unexpected code behavior. Since C# is built on the .NET framework, the language can be used with numerous .NET features like async/await, LINQ, and lambdas.

Compilation: Xamarin.iOS and Xamarin.Android are the two major products offered by this platform. For iOS, the platform follows Ahead-of-Time (AOT) compilation, whereas on Android a Just-in-Time (JIT) compilation approach is followed. However, the compilation process is fully automated and is equipped with features to tackle and resolve issues like memory allocation and garbage collection.

App working principles: Xamarin has an MVVM architecture coupled with two-way data binding, which provides great support for collaborative work among different departments. If your development approach doesn't follow a strict performance-oriented approach, then go for Xamarin, as it provides high process flexibility.

How exactly does it work? Not only does C# form the basis of this platform, but it also provides developers with access to native APIs. This enables Xamarin to create universal backend code that can be used with any UI based on the native SDKs.

An overview of React Native

Created by Facebook, React Native is one of the most widely used app development platforms. From enabling mobile developers to build highly efficient apps to ensuring great quality and increased sustainability, the demand for React Native apps is sure to increase over time.

Key features: React Native apps are written in JavaScript, with the same codebase targeting both the Android and iOS platforms. The platform provides numerous built-in tools, libraries, and frameworks. Its standout feature of hot reloading enables developers to make amendments to the code without spending much time on the compilation process.

Development environment: The React Native app development platform requires developers to follow a wide array of actions and processes to build a UI. The platform supports easy and faster iterations while enabling execution of different code even while the application is running. Since React Native doesn't provide full support for 64-bit, this can impact the runtime and speed of code on iOS.

Architecture: The React Native app development platform supports a modular architecture. This means that developers can categorize the code into different functional and independent blocks of code. This characteristic of the React Native platform therefore provides process flexibility, ease of upgrade and easier application updates.

Compilation: The React Native app development platform follows and supports Just-in-Time compilation for Android applications. In the case of iOS applications, Just-in-Time compilation is not available, as it might slow down the code execution procedure.

App working principles: This platform follows a one-way data binding approach, which helps in boosting the overall performance of the application. However, a two-way data binding approach can be implemented manually, which is useful for introducing code coherence and reducing complex errors.

How does it actually work? React Native enables developers to build applications using React and JavaScript. The working of a React Native application can be described as thread-based interaction. One thread handles the UI and user gestures, while the other is React Native-specific and deals with the application's business logic. It also determines the structure and functionality of the overall user interface. The interaction could be asynchronous, batched or serializable.

Learning curves of Xamarin and React Native

To master Xamarin, one has to be skilled in .NET. Xamarin provides you with easy and complete access to SDK platform capabilities because of the Xamarin.iOS and Xamarin.Android libraries. Xamarin provides a complete package, which reduces the need to integrate third-party tools and libraries, so to become a professional in Xamarin app development all you need is skills and expertise in C#, .NET and some basic working knowledge of the native platform classes.

On the other hand, mastering React Native requires thorough knowledge and expertise of JavaScript. Since the platform doesn't offer as well-integrated a set of libraries and tools, knowledge of third-party sources and tools is of core importance.

Key differences between Xamarin and React Native

While Trello, Slack, and GitHub use Xamarin, other successful companies like Facebook, Walmart, and Instagram have React Native-based mobile applications. While fully native applications offer better performance, not every company can afford to develop a separate app for each platform; cross-platform frameworks like Xamarin and React Native are the best alternative to native apps, as they offer higher development flexibility.

Where Xamarin offers multiple platform support, cost-effectiveness and time savings, React Native allows faster development and increased efficiency. Since Xamarin provides complete hardware support, issues of hardware compatibility are reduced. React Native, on the other hand, provides you with ready-made components, which reduce the need to write the entire code from scratch.

In React Native, with integration of and investment in third-party libraries and plugins, the need for WebView functions is eliminated, which in turn reduces the memory requirements. Xamarin, on the other hand, provides you with a comprehensive toolkit with zero investment in additional plugins and third-party sources. However, this cross-platform framework offers restricted access to open-source technologies.

A good-quality React Native application requires more than a few weeks to develop, which increases not only the development time but also the app complexity. If time consumption is one of the drawbacks of React Native, then the additional optimization needed to support larger applications counts as a limitation for Xamarin. While frequent updates contribute to shrinkage of the customer base of React Native apps, stability complaints and app crashes are common issues with Xamarin applications.

When to go for Xamarin?

Case #1: The foremost advantage of Xamarin is that all you need is command over C# and .NET.

Case #2: One of the most exciting trends currently in the mobile development industry is the Internet of Things. Considering the rapid increase in the need for and demand of IoT, if you are developing a product that involves multiple hardware capacities and user devices, then make developing with Xamarin your number one priority. Xamarin is fully compatible with numerous IoT devices, which eliminates the need for a third-party source for functionality implementation.

Case #3: If you are budget-constrained and time-bound, then Xamarin is the solution to all your app development worries. Since the backend code for both Android and iOS is similar, it reduces development time and effort and is budget friendly.

Case #4: The revolutionary and integral test cloud is probably the best part about Xamarin. Even though Test Cloud might take up a fraction of your budget, this expense is worth investing in. The test cloud not only recreates the activity of actual users, but it also ensures that your application works well on various devices and is accessible to the maximum number of users.

When to go for React Native?

Case #1: When it comes to game app development, Xamarin is not a wise choice. Since it relies on the C# framework and AOT compilation, getting speedy results and rendering is difficult with Xamarin. A gaming application is updated dynamically, is highly interactive and has high-performance graphics; the drawback of zero compatibility with heavy graphics makes Xamarin a poor choice for game app development. For these very reasons, many developers go for React Native when it comes to developing high-performing gaming applications.

Case #2: The size of the application is an indirect indicator of the success of an application among targeted users. Since many smartphone users have their own photos and videos stuffed in their phone's memory, there is barely any memory and storage left for an additional application. Xamarin-based apps are relatively heavier and occupy more space than their React Native counterparts.

Wondering which framework to choose?

Xamarin and React Native are the two major players in the mobile app development industry. So, it's entirely up to you whether you want to proceed with React Native or Xamarin. However, your decision should be based on the type of application, its requirements and the development cost. If you want a faster development process, go for Xamarin, and if you are developing a game, e-commerce or social site, go for React Native.

Author Bio

Khalid Durrani is an Inbound Marketing Expert and a content strategist. He likes to cover topics related to design, the latest tech, startups, IoT, artificial intelligence, big data, AR/VR, UI/UX and much more. Currently, he is the global marketing manager of LogoVerge, an AI-based design agency.

Read more:
The Ionic team announces the release of Ionic React Beta
React Native 0.59 RC0 is now out with React Hooks, and more
Changes made to React Native Community’s GitHub organization in 2018 for driving better collaboration


Is Golang truly community driven and does it really matter?

Sugandha Lahoti
24 May 2019
6 min read
Golang, also called Go, is a statically typed, compiled programming language designed by Google. Golang is going from strength to strength, as more engineers than ever are using it at work, according to the Go User Survey 2019. Yet one opinion led the Hacker News community into a heated debate last week: "Go is Google's language, not the community's".

The thread was started by Chris Siebenmann, who works at the Department of Computer Science, University of Toronto. His blog post reads, "Go has community contributions but it is not a community project. It is Google's project." Chris explicitly states that the community's voice doesn't matter very much for Go's development, and we have to live with that. He argues that Google is the gatekeeper for community contributions; it alone decides what is and isn't accepted into Go. If a developer wants some significant feature to be accepted into Golang, working to build consensus in the community is far less important than persuading the Golang core team. He then cites the example of how one member of Google's Go core team discarded the entire Go modules system that the Go community had been working on and brought in a relatively radically different model.

Chris believes that the Golang team cares about the community and wants it to be involved, but only up to a certain point. He wants the Go core team to be bluntly honest about the situation, rather than pretend and implicitly lead people on. He further adds, "Only if Go core team members start leaving Google and try to remain active in determining Go's direction, can we [be] certain Golang is a community-driven language."

He then compares Go with C++, calling the latter a genuine community-driven language. He says there are several major implementations of C++ which are genuine community projects, and the direction of C++ is set by an open standards committee with a relatively distributed membership.

https://twitter.com/thatcks/status/1131319904039309312

What is better: community-driven or corporate ownership?

There has been an opinion floating around among developers that some open source programming projects are really commercial projects driven mainly by a single company. If we look at the top open source projects, most of them have some kind of corporate backing (Apple's Swift, Oracle's Java, MySQL, Microsoft's TypeScript, Google's Kotlin, Golang, Android, MongoDB, Elasticsearch), to name a few. Which brings us to the question: what does corporate ownership of open source projects really mean?

A benevolent dictatorship can have two outcomes. If the community for a particular project suggests a change and that change is a bad idea, the corporate team can intervene and stop it. On the other hand, it can also stop good ideas from the community from being implemented, even if only a handful of members from the core team disagree.

Chris's post has received a lot of attention from developers on Hacker News, who both sided with and disagreed with the opinion put forward. One comment reads, "It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way."

Another comment reads, "Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. Many claim to represent the community, but not the community that doesn't share their opinion. Without clear leaders, I fear technical direction and taste will be about politics which seems more uncertain/risky. I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down."

Rather than splitting the question between community and corporate, a more accurate framing would be how much market value depends on those projects. If a project is thriving, enterprises will usually make good decisions to handle it. However, another entirely valid and important question to ask is: should open source projects be driven by their market value?

Another common argument is that the core team's full-time job is to take care of the language, rather than making errant decisions based on community backlash. Google (or Microsoft, or Apple, or Facebook for that matter) will not make or block a change in a way that kills an entire project. But this does not mean they should sit idly, ignoring the community response. Ideally, the more a project genuinely belongs to its community, the more it will reflect what the community wants and needs.

Google also has a propensity to kill its own products. What happens when Google is not as interested in Golang anymore? The company could leave it to the community to figure out the governance model suddenly, by pulling the original authors off onto some other exciting new project. Or it may let the authors work on Golang only in their spare time at home or on the weekends. While Google's history shows that many of its dead products were actually an important step towards something better and more successful, why and how much of that logic would be directly relevant to an open source project is something worth thinking about.

As one Hacker News user wrote, "Go is developed by Bell Labs people, the same people who brought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features simply because the community wants them."

Another says, "The way how Golang team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback and clear explanation of how the process works."

Rest assured, if any major change is made to Go, even a drastic one such as killing it, it will not be done without consulting the community and taking their feedback.

Read more:
Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
State of Go February 2019 – Golang developments report for this month released


Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

Sugandha Lahoti
24 May 2019
4 min read
Yesterday, at the GitHub Satellite 2019 event, GitHub launched probably its most game-changing yet most debated feature: Sponsors. GitHub Sponsors works much like Patreon, in the sense that developers can sponsor the efforts of a contributor "seamlessly through their GitHub profiles". Developers will be able to opt into having a "Sponsor me" button on their GitHub repositories and open-source projects, where they will be able to highlight their funding models.

GitHub shared that it will cover payment processing fees for the first 12 months of the program to celebrate the launch. "100% of your sponsorship goes to the developer," GitHub wrote in an announcement. At launch, this feature is marked as "wait list" and is currently in beta. To start off the program, the code hosting site has also launched the GitHub Sponsors Matching Fund, which means that it will match all contributions up to $5,000 during a developer's first year in GitHub Sponsors.

GitHub Sponsors could prove beneficial for developers working on open source software that isn't profitable. This way they can raise money directly through GitHub, the leading repository host for open-source software. More importantly, GitHub Sponsors is not limited to software developers; it is open to all open-source contributors, including those who write documentation, provide leadership or mentor new developers, for example. This, along with the promise of zero fees to use the program, has got people excited.

https://twitter.com/rauchg/status/1131807348820008960
https://twitter.com/EricaJoy/status/1131640959886741504

On the flip side, GitHub Sponsors could also dilute the essence of what open source is by financially influencing what developers work on. It may drive open-source developers to focus on projects that are more likely to attract financial contributions over projects which are more interesting and challenging but aren't likely to find financial backers on GitHub. This could hurt FOSS contributions as people start to expect to be paid rather than contributing for intrinsic motivations. This, in turn, could lead to toxic politics among project contributors regarding who gets credit and who gets paid. Companies could also use GitHub sponsorships to judge the health of open source projects.

People are also speculating that this could be a strategy by Microsoft (GitHub's parent company) to centralize and enclose open source community dynamics, as well as benefit from their monetization. Some are also wondering about the possible effects of monetization on OSS, which could lead to mega corporations profiteering off free labor, thus changing the original vision of an open source community.

https://twitter.com/andrestaltz/status/1131521807876591616

Andre Staltz also made an interesting point about the potential of the zero-fee model driving other open source payment models out of existence. He believes that once Microsoft's dominance is achieved, GitHub's commissions could go up.

https://twitter.com/andrestaltz/status/1131526433027837952

A Hacker News user also conjectured that this may get Microsoft access to data on top-notch developers: "Will this mean that Microsoft gets a bunch of PII on top-notch developers (have to enter name + address info to receive or send payments), and get much more value from that data than I can imagine?"

At present, GitHub is offering this feature as an invite-only beta with a waitlist. It will be interesting to see if and how it will change the dynamics of open source collaboration once it rolls out fully. One tweet observes: "I think it bears repeating that the path to FOSS sustainability is not individuals funding projects. We will only reach sustainability when the companies making profit off our work are returning value to the Commons."

Read our full coverage of GitHub Satellite here. To know more about GitHub Sponsors, visit the official blog.

Read more:
GitHub Satellite 2019 focuses on community, security, and enterprise
GitHub announces beta version of GitHub Package Registry, its new package management service
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval


PostgreSQL 12 Beta 1 released

Fatema Patrawala
24 May 2019
6 min read
The PostgreSQL Global Development Group yesterday announced the first beta release of PostgreSQL 12, which is now available for download. This release contains previews of all features that will be available in the final release of PostgreSQL 12, though some details of the release could still change.

PostgreSQL 12 feature highlights

Indexing Performance, Functionality, and Management

PostgreSQL 12 improves the overall performance of standard B-tree indexes, with improvements to the space management of these indexes as well. These improvements also reduce the size of B-tree indexes that are frequently modified, in addition to providing a performance gain. Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently, which lets you perform a REINDEX operation without blocking any writes to the index. This feature should help with lengthy index rebuilds that could otherwise cause downtime when managing a PostgreSQL database in a production environment.

PostgreSQL 12 extends the abilities of several of the specialized indexing mechanisms. The ability to create covering indexes, i.e. the INCLUDE clause that was introduced in PostgreSQL 11, has now been added to GiST indexes. SP-GiST indexes now support the ability to perform K-nearest neighbor (K-NN) queries for data types that support the distance (<->) operation. The amount of write-ahead log (WAL) overhead generated when creating a GiST, GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which benefits the disk utilization of a PostgreSQL cluster and features such as continuous archiving and streaming replication.

Inlined WITH queries (Common table expressions)

Common table expressions (or WITH queries) can now be automatically inlined in a query if they: a) are not recursive, b) do not have any side effects, and c) are only referenced once in a later part of the query. This removes an "optimization fence" that has existed since the introduction of the WITH clause in PostgreSQL 8.4.

Partitioning

PostgreSQL 12 improves the processing of tables with thousands of partitions for operations that only need to use a small number of partitions. This release also improves the performance of both INSERT and COPY into a partitioned table, and ATTACH PARTITION can now be performed without blocking concurrent queries on the partitioned table. Additionally, the ability to use foreign keys to reference partitioned tables is now permitted in PostgreSQL 12.

JSON path queries per SQL/JSON specification

PostgreSQL 12 now allows execution of JSON path queries per the SQL/JSON specification in the SQL:2016 standard. Similar to XPath expressions for XML, JSON path expressions let you evaluate a variety of arithmetic expressions and functions in addition to comparing values within JSON documents. A subset of these expressions can be accelerated with GIN indexes, allowing the execution of highly performant lookups across sets of JSON data.

Collations

PostgreSQL 12 now supports case-insensitive and accent-insensitive comparisons for ICU-provided collations, also known as "nondeterministic collations". When used, these collations can provide convenience for comparisons and sorts, but can also lead to a performance penalty, as a collation may need to make additional checks on a string.

Most-common Value Extended Statistics

CREATE STATISTICS, introduced in PostgreSQL 10 to help collect more complex statistics over multiple columns to improve query planning, now supports most-common value statistics. This leads to improved query plans for distributions that are non-uniform.

Generated Columns

PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which are computed only when a column is read as part of a query, are not implemented yet.
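To make the stored generated column behaviour concrete, here is a minimal sketch that exercises it from Go through the standard database/sql package. The table definition, connection string, and the use of the github.com/lib/pq driver are illustrative assumptions rather than anything prescribed by the announcement.

```go
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq" // assumed Postgres driver; any database/sql driver works
)

func main() {
    // Connection string is an assumption for a local PostgreSQL 12 beta instance.
    db, err := sql.Open("postgres", "host=localhost dbname=postgres sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // "area" is a stored generated column: it is computed from the other
    // columns on every INSERT and UPDATE and written to disk.
    if _, err := db.Exec(`
        CREATE TABLE IF NOT EXISTS rooms (
            width  numeric NOT NULL,
            height numeric NOT NULL,
            area   numeric GENERATED ALWAYS AS (width * height) STORED
        )`); err != nil {
        log.Fatal(err)
    }

    // Only the base columns are inserted; PostgreSQL fills in "area" itself.
    if _, err := db.Exec(`INSERT INTO rooms (width, height) VALUES (3, 4)`); err != nil {
        log.Fatal(err)
    }

    var area float64
    if err := db.QueryRow(`SELECT area::float8 FROM rooms LIMIT 1`).Scan(&area); err != nil {
        log.Fatal(err)
    }
    fmt.Println("generated area:", area) // expected: 12
}
```

Trying to insert a value into area directly is rejected, since a column declared GENERATED ALWAYS is always derived from its expression.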
Pluggable Table Storage Interface

PostgreSQL 12 introduces the pluggable table storage interface, which allows for the creation and use of different methods for table storage. New access methods can be added to a PostgreSQL cluster using the CREATE ACCESS METHOD command and subsequently added to tables with the new USING clause on CREATE TABLE. A table storage interface can be defined by creating a new table access method. In PostgreSQL 12, the storage interface used by default is the heap access method, which is currently the only built-in method.

Page Checksums

The pg_verify_checksums command has been renamed to pg_checksums and now supports the ability to enable and disable page checksums across a PostgreSQL cluster that is offline. Previously, page checksums could only be enabled during the initialization of a cluster with initdb.

Authentication & Connection Security

GSSAPI now supports client-side and server-side encryption and can be specified in the pg_hba.conf file using the hostgssenc and hostnogssenc record types. PostgreSQL 12 also allows for discovery of LDAP servers based on DNS SRV records if PostgreSQL was compiled with OpenLDAP.

Few noted behavior changes in PostgreSQL 12

Several changes introduced in PostgreSQL 12 can affect the behavior as well as the management of your ongoing operations. A few of these are noted below; for other changes, visit the "Migrating to Version 12" section of the release notes.

The recovery.conf configuration file is now merged into the main postgresql.conf file. PostgreSQL will not start if it detects that recovery.conf is present. To put PostgreSQL into a non-primary mode, you can use the recovery.signal and the standby.signal files. You can read more about archive recovery here: https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY

Just-in-Time (JIT) compilation is now enabled by default.

OIDs can no longer be added to user-created tables using the WITH OIDS clause. Operations on tables that have columns created using WITH OIDS (i.e. columns named "OID") will need to be adjusted. Running a SELECT * command on a system table will now also output the OID for the rows in the system table, instead of the old behavior which required the OID column to be specified explicitly.

Testing for Bugs & Compatibility

The stability of each PostgreSQL release greatly depends on the community testing the upcoming version with their workloads and testing tools in order to find bugs and regressions before the general availability of PostgreSQL 12. As this is a beta, minor changes to database behaviors, feature details, and APIs are still possible. The PostgreSQL team encourages the community to test the new features of PostgreSQL 12 in their database systems to help eliminate any bugs or other issues that may exist. A list of open issues is publicly available in the PostgreSQL wiki, and you can report bugs using the form on the PostgreSQL website.
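Returning to the SQL/JSON path support described earlier, the following is a minimal sketch of calling one of the new jsonb path functions from Go. The jsonb_path_query call and the path filter syntax come from the SQL/JSON feature above; the driver, connection string, and sample document are illustrative assumptions.

```go
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq" // assumed Postgres driver
)

func main() {
    // Connection string is an assumption for a local PostgreSQL 12 beta instance.
    db, err := sql.Open("postgres", "host=localhost dbname=postgres sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    doc := `{"items": [{"sku": "a", "price": 5}, {"sku": "b", "price": 15}]}`

    // jsonb_path_query evaluates a SQL/JSON path expression against a jsonb
    // value; the filter keeps only the items whose price exceeds 10.
    rows, err := db.Query(
        `SELECT jsonb_path_query($1::jsonb, '$.items[*] ? (@.price > 10)')`, doc)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    for rows.Next() {
        var item string
        if err := rows.Scan(&item); err != nil {
            log.Fatal(err)
        }
        fmt.Println(item) // expected: {"sku": "b", "price": 15}
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
}
```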
Beta Schedule

This is the first beta release of version 12. The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2019. For further information, please see the Beta Testing page. Many other new features and improvements have been added to PostgreSQL 12; please see the Release Notes for a complete list of new and changed features.

Read more:
PostgreSQL 12 progress update
Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

Richard Gall
24 May 2019
6 min read
If you're a Node.js developer, it might seem odd to be thinking about Azure. However, as the software landscape becomes increasingly cloud native, it's well worth thinking about the cloud solution you and your organization use. It should, after all, make life easier for you as much as it should help your company scale and provide better services and user experiences for customers.

We don't often talk about it, but cloud isn't one thing: it's a set of tools that provide developers with new ways of building and managing apps. It helps you experiment and learn. In the development of Azure, developer experience is at the top of the agenda. In many ways the platform represents Microsoft's transformation as an organization, from one that seemed to distrust open source developers to one that is hell-bent on making them happier and more productive. So, if you're a Node.js developer - or any kind of JavaScript developer, for that matter - reading this with healthy scepticism (and why wouldn't you), let's look at some of the reasons and ways Azure can support you in your work.

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

Deploy apps quickly with Azure App Service

As a developer, deploying applications quickly is one of your top priorities. Azure supports that with Azure App Service. Essentially, Azure App Service is a PaaS that brings together a variety of other Azure services and resources, helping you to develop and host applications without worrying about your infrastructure. There are lots of reasons to like Azure App Service, not least the speed with which it allows you to get up and running. Most importantly, it gives application developers access to a range of Azure features, such as load balancing and security, as well as the platform's integrations with tools for DevOps processes. Azure App Service works for developers working with a range of platforms, from Python to PHP - Node.js developers that want to give it a try should start here.

Manage application and infrastructure resources with the Azure CLI

The Azure CLI is a useful tool for managing cloud resources. It can also be used to deploy an application quickly. If you're a developer that likes working with the CLI, this really does offer a nice way of working, allowing you to easily move between each step in the development and deployment process. If you want to try deploying a Node.js application using the Azure CLI, check out this tutorial, or learn more about the Azure CLI here.

Go serverless with Azure Functions

Serverless has been getting serious attention over the last 18 months. While it's true that serverless is a hyped field, and that in reality there are serious considerations to be made about how and where you choose to run your software, it's relatively easy to try it out for yourself using Azure. In fact, the name itself is useful in demystifying serverless: the word 'functions' is a much more accurate description of what you're doing as a developer. A function is essentially a small piece of code that runs in the cloud and executes certain actions or tasks in specific situations. There are many reasons to go serverless, from a pay-per-use pricing model to support for your preferred dependencies. And while there are plenty of options in terms of cloud providers, Azure is worth exploring because it makes serverless so easy for developers to leverage. Learn more about Azure Functions here.
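To give a feel for just how small a function can be, here is a rough sketch of an HTTP-triggered function written as an Azure Functions custom handler, which is simply a tiny web server the Functions host forwards requests to. It is in Go only to match the other code in this collection (a Node.js developer would write the equivalent few lines in JavaScript), and the /api/hello route name is an assumption; FUNCTIONS_CUSTOMHANDLER_PORT is the environment variable custom handlers read to know which port to listen on.

```go
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

// hello is the whole "function": a small handler that runs only when the
// Functions host routes an HTTP request to it.
func hello(w http.ResponseWriter, r *http.Request) {
    name := r.URL.Query().Get("name")
    if name == "" {
        name = "world"
    }
    fmt.Fprintf(w, "Hello, %s!\n", name)
}

func main() {
    // The Functions host tells a custom handler which port to listen on;
    // fall back to 8080 so the sketch can also be run locally on its own.
    port := os.Getenv("FUNCTIONS_CUSTOMHANDLER_PORT")
    if port == "" {
        port = "8080"
    }
    http.HandleFunc("/api/hello", hello)
    log.Fatal(http.ListenAndServe(":"+port, nil))
}
```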
Simple, accessible dashboards for logging and monitoring

In 2019, building more reliable and observable systems will expand from being the preserve of SREs to become something developers are accountable for too. This is the next step in the evolution of software engineering, as new silos are broken down. It's for this reason that the monitoring tools offered by Azure could prove so valuable for developers. With Application Insights and Azure Monitor, you can gain the level of transparency you need to properly manage your application. Learn how to successfully monitor a Node.js app here.

Build and deploy applications with Azure DevOps

DevOps shouldn't really require additional effort and thinking - but more often than not it does. Azure is a platform that appears to understand this implicitly, and the team behind it has done a lot to make it easier to cultivate a DevOps culture with several useful tools and integrations. Azure Test Plans is a toolkit for testing applications, which can seriously help you improve the way you test in your development processes, while Azure Boards can support project management from inside the Azure ecosystem - useful if you're looking for a new way to manage agile workflows.

But perhaps the most important feature within Azure DevOps - for developers at least - is Azure Pipelines. Azure Pipelines is particularly useful for JavaScript developers as it gives you the option to run a build pipeline on a Microsoft-hosted agent that has a wide range of common JavaScript tools (like Yarn and Gulp) pre-installed. Microsoft claims this is the "simplest way to build and deploy" because "maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use." Find out how to build, test, and deploy Node.js apps using Azure Pipelines with this tutorial.

Read next: 5 developers explain why they use Visual Studio Code

Conclusion: Azure is a place to experiment and learn

We often talk about cloud as a solution or service. And although it can provide solutions to many urgent problems, it's worth remembering that cloud is really a set of many different tools; it isn't one thing. Because of this, cloud platforms like Azure are as much places to experiment and try out new ways of working as they are simply someone else's server space. With that in mind, it could be worth experimenting with Azure to try out new ideas - after all, what's the worst that can happen? More than anything, cloud native should make development fun.

Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.


GitHub Satellite 2019 focuses on community, security, and enterprise

Bhagyashree R
24 May 2019
6 min read
Yesterday, GitHub hosted its annual product and user conference, GitHub Satellite 2019, in Berlin, Germany. Along with introducing a bunch of tools for better reliability and collaboration, this year GitHub also announced a new platform for funding contributors to a project. The announcements were focused on three areas: community, security, and enterprise. Here are some of the key takeaways from the event.

Community: Financial support for open source developers

GitHub has launched a new feature called GitHub Sponsors, which allows any developer to sponsor the efforts of a contributor "seamlessly through their GitHub profiles". At launch, this feature is marked as "wait list" and is currently in beta. GitHub shared that it will not be charging any fees for using this feature and will also cover the processing fees for the first year of the program. "We'll also cover payment processing fees for the first 12 months of the program to celebrate the launch. 100% of your sponsorship goes to the developer," GitHub wrote in an announcement. To start off this program, the code hosting site has also launched the GitHub Sponsors Matching Fund, which means that it will match all contributions up to $5,000 during a developer's first year in GitHub Sponsors. It would be an understatement to say that this was one of the biggest announcements at the GitHub Satellite event.

https://twitter.com/EricaJoy/status/1131640959886741504
https://twitter.com/patrickc/status/1131556816721080324

GitHub also announced Tidelift as a launch partner, with over 4,000 open source projects on GitHub eligible for income from Tidelift through GitHub Sponsors. In a blog post, Tidelift wrote, "Over the past year, we've seen the rapid rise of a broad-based movement to pay open source maintainers for the value they create. The attention that GitHub brings to this effort should only accelerate this momentum. And it just makes sense—paying the maintainers for the value they create, we ensure the vitality of the software at the heart of our digital society." Read the official blog on GitHub Sponsors for more information.

Security: "It's more important than ever that every developer becomes a security developer."

The open source community is driven by a culture of collaboration and trust. Nearly every application built today has some dependence on open source software. This is its biggest advantage, as it saves you from reinventing the wheel. But what if someone in this dependency chain misuses that trust and leaks malware into your application? Sounds like a nightmare, right? To address this, GitHub announced a myriad of security features at GitHub Satellite that will make it easier for developers to ensure code safety.

Broadened security vulnerability alerts

So far, security vulnerability alerts were shown for projects written in .NET, Java, JavaScript, Python, and Ruby. GitHub, together with WhiteSource, has now expanded this feature to detect potential security vulnerabilities in open source projects in other languages as well. WhiteSource is an open source security and license compliance management platform, which has developed an "Open Source Software Scanning" capability that scans the open source components of your project. The alerts will also be more detailed to enable developers to assess and mitigate the vulnerabilities.

Dependency insights

Through dependency insights, developers will be able to quickly view vulnerabilities, licenses, and other important information for the open source projects their organization depends on.
This will come in handy when auditing dependencies and their exposure when a security vulnerability is released publicly. This feature leverages the dependency graph, giving enterprises full visibility into their dependencies, including details on security vulnerabilities and open source licenses.

Token scanning

GitHub announced the general availability of token scanning at GitHub Satellite, a feature that enables GitHub to scan public repositories for known token formats to prevent fraudulent use of credentials that were committed accidentally. It now supports more token formats, including Alibaba Cloud, Mailgun, and Twilio.

Automated security fixes with Dependabot

To make it easier for developers to update their project's dependencies, GitHub will now come integrated with Dependabot, as announced at GitHub Satellite. This will allow GitHub to check your dependencies for known security vulnerabilities and then automatically open pull requests to update them to the minimum possible secure version. These automated security pull requests will contain information about the vulnerability, like release notes, changelog entries, and commit details.

Maintainer security advisories (beta)

GitHub now provides open source maintainers with a private workspace where they can discuss, fix, and publish security advisories. You can find the security advisories in your dependencies using the "Security" tab on the GitHub interface. More GitHub security updates announced at GitHub Satellite are available here.

Enterprise: Becoming an "open source enterprise"

The growing collaboration between enterprises and the open source community has enabled innovation at scale. To further ease this collaboration, GitHub introduced several improvements to its Enterprise offering at GitHub Satellite:

The enterprise account connects organizations to collaborate and build inner source workflows, and its new admin center meets security and compliance needs with global administration and policy enforcement.

Two new user roles, Triage and Maintain, allow enterprise teams to secure and address their access control needs. Administrators can now recruit help, like triaging issues or managing users, from trusted contributors without also granting the ability to write to the repository or to change repository settings.

Enterprises can now add groups from their identity provider to a team within GitHub and automatically keep membership synchronized.

Enterprises can create internal repositories that are visible only to their developers, which can help them reuse code and build communities within their company.

GitHub Enterprise Cloud administrators can access audit log events using the GraphQL API to analyze data on user access, team creation, and more.

Enterprises can create a draft pull request to ask for input, get feedback on an approach, and refine work before it's ready for review.

Customers will also be protected in their use of GitHub from claims alleging that GitHub products or services infringe third-party IP rights.

Learn more about the GitHub Enterprise offering here. These are the major updates; for detailed coverage, we recommend you watch the complete GitHub Satellite event that was live-streamed yesterday. Next up for GitHub is the GitHub Universe conference, taking place November 13-14 in San Francisco.

Read more:
GitHub announces beta version of GitHub Package Registry, its new package management service
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval
Apache Software Foundation finally joins the GitHub open source community

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting

Fatema Patrawala
23 May 2019
8 min read
At Amazon's annual shareholder meeting on Wednesday, shareholders voted down a proposal from more than 7,500 Amazon employees on climate justice. In the proposal, employees demanded that Jeff Bezos create a comprehensive climate-change plan for the company. Last week, employees also received support from two of the largest proxy advisors to institutional investors, which agreed that shareholders should vote yes on the proposal. But these efforts faced a major setback yesterday when the shareholders did not pass the proposal.

Amazon's annual proxy statement included 11 resolutions, covering Amazon's controversial facial recognition technology, demands for more action on climate change, salary transparency, and other equity issues. According to reports from GeekWire, all 11 resolutions were voted down by the shareholders. Amazon did not release shareholder vote totals yesterday but said the information would be filed with the U.S. Securities and Exchange Commission later this week.

Listed below are the 11 resolutions presented at the meeting:

Food waste: Shareholders want Amazon to issue an annual report on the environmental and social impacts of food waste generated by the company. As part of the report, they're asking Amazon to study the feasibility of setting new goals for reducing food waste and working toward them.
Special shareholder meetings: The shareholders behind this resolution wanted to amend Amazon's bylaws to make it easier to call special shareowner meetings. Specifically, they want to give shareholders with an aggregate of 20 percent of the company's outstanding stock that authority. Amazon currently only allows shareholders with 30 percent of company shares to call a special meeting, according to the resolution.
Facial recognition: This resolution would prevent Amazon from selling its controversial facial recognition technology to government agencies without board approval. A separate resolution asks Amazon's board to commission an independent study of the potential threats to civil liberties that the technology poses.
Hate speech: Investors want Amazon to issue a report on its efforts to address products in its marketplace that promote hate speech and violence.
Independent board chair: Shareholders asked the board to appoint an independent chair to replace Jeff Bezos, who serves as chair and CEO. "We believe the combination of these two roles in a single person weakens a corporation's governance, which can harm shareholder value," the resolution says.
Sexual harassment: This resolution asked Amazon management to review the company's sexual harassment policies to assess whether new standards should be implemented.
Climate change: Shareholders asked Amazon to prepare a report describing its plan to reduce fossil fuel dependence and prepare for disruptions caused by the climate crisis. The resolution garnered support from more than 7,600 Amazon employees.
Board diversity: Under this resolution, Amazon's board of directors would disclose their own "skills, ideological perspectives, and experience" and describe the minimum requirements for new board nominees. The goal is to ensure the board represents a diverse set of ideas and backgrounds.
Pay equity: Shareholders want Amazon to report on the company's global median gender pay gap. "A report adequate for investors to assess company strategy and performance would include the percentage global median pay gap between male and female employees across race and ethnicity, including base, bonus and equity compensation," the resolution says.
Executive compensation: This proposal asked the board to study whether it would be feasible to use environmental and social responsibility metrics when determining compensation for senior executives.
Vote-counting: Shareholders asked Amazon's board of directors to change corporate governance rules so that all resolutions are decided by a simple majority vote.

About 50 members of the group Amazon Employees for Climate Justice attended the event, representing the staffers who signed a letter for climate policy. Employees put forth a proposal at the meeting requesting Amazon's board of directors to accept it and take action, but the board advised shareholders to vote against it, and as a result the proposal was not passed by the shareholders. Inside the meeting, a group of Amazon employees asked the company to take action on climate change; they all stood in white shirts in support of the resolution.

https://twitter.com/AMZNforClimate/status/1131253997933719552

After the proposal failed to pass, employees attempted to address Amazon CEO Jeff Bezos directly during the Q&A session. Emily Cunningham, an Amazon UX designer and an organizer of the climate change initiative group, asked Bezos to come on stage to hear the proposal before making her case. Bezos chose not to appear at that point; instead, David Zapolsky, Amazon's General Counsel, responded to Emily.

"Without bold, rapid action, we will lose our only chance to avoid catastrophic warming. There's no issue more important to our customers, to our world, than the climate crisis, and we are falling far short," Emily said in her speech. She added, "Our home, Planet Earth, not distant far off places in space, desperately needs bold leadership. We have the talent, the passion, the imagination. We have the scale, speed and resources. Jeff, all we need is your leadership."

https://twitter.com/emahlee/status/1131286074393677824

"Jeff remained off-stage, ignored the employees and would not speak to them," the group said in a statement after the event. "Jeff's inaction and lack of meaningful response underscore his dismissal of the climate crisis and spoke volumes about how Amazon's board continues to de-prioritize addressing Amazon's role in the climate emergency."

https://twitter.com/AMZNforClimate/status/1131312970594521088

"Amazon has the scale and resources to spark the world's imagination and lead the way on addressing the climate crisis," said Jamie Kowalski, a software engineer who co-filed the resolution and attended the shareholder meeting. "What we're missing is leadership from the very top of the company."

The climate proposal requested a report outlining how Amazon "is planning for disruptions posed by climate change" and "reducing company-wide dependence on fossil fuels", citing Amazon's coal-powered data centers and the amount of gasoline burnt for package deliveries. At a press conference following the shareholder meeting, the employees suggested Amazon should put forth a timeline for reaching a zero emission goal. The proposal also cited other tech giants that have released reports on their contributions to climate change and have committed to addressing those concerns. For example, Microsoft has been carbon neutral since 2012 and has pledged to decrease its operational carbon emissions 75% by 2030. Google has been carbon neutral since 2007.

In its response to the proposal, Amazon's board noted it has committed to reaching a net zero carbon footprint on 50% of shipments by 2030. Amazon also has a plan to power its global infrastructure, including Amazon Web Services (AWS), with sustainable energy. Other cloud providers, including Google and Microsoft Azure, offset the energy used for hosting to reach a zero carbon footprint, while AWS does not. The board said it agreed that "planning for potential disruptions posed by climate change and reducing company-wide dependence on fossil fuels are important," but defended its stance by saying Amazon was already doing this, and suggested shareholders vote against the proposal.

Later, during the meeting's Q&A, one of the employees asked Bezos directly if he would support initiatives to address climate change. Bezos said in response, "That's a very important issue. It's hard to find an issue that is more important than climate change. … It's also, as everyone knows, a very difficult problem." He added: "Both e-commerce and cloud computing are inherently more efficient than their alternatives. So we're doing a lot even intrinsically. But that's not what I'm talking about in terms of the initiatives we're taking." He cited wind, solar and other projects. "There are a lot of initiatives here underway, and we're not done, we'll think of more, we're very inventive," he concluded.

The employee group said at its press conference that the board's stance on the proposal made it difficult to pass, and insisted they would continue to pressure Amazon. "Because the board still does not understand the severity of the climate crisis, we will file this resolution again next year," said Weston Fribley, another software engineer who co-filed the resolution. "We will announce other actions in the coming months. We – Amazon's employees – have the talent and experience to remake entire industries with incredible speed. This is work we want to do."

Amazon S3 is retiring support for path-style API requests; sparks censorship fears
Amazon to roll out automated machines for boxing up orders: Thousands of workers' job at stake
Amazon resists public pressure to re-assess its facial recognition business; "failed to act responsibly", says ACLU

Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users

Bhagyashree R
23 May 2019
5 min read
Yesterday, Apple announced a new ad attribution model which aims to strike the right balance between online user privacy and enabling advertisers to measure the effectiveness of their ad campaigns. This model, named Privacy Preserving Ad Click Attribution, is implemented in WebKit and is offered as an experimental feature in Safari Technology Preview 82+.

The ad attribution model and its privacy concerns

Online advertising is one of the most effective media for businesses to expand their reach and find new customers, and an ad click attribution model allows you to analyze which of your advertising campaigns or marketing channels are leading to actual conversions. Generally, ad attribution is done through cookies and something called "tracking pixels". Cookies are small data files stored by your browser to remember stateful information, for instance, items added to the shopping cart in an online store. A tracking pixel is basically a piece of HTML code which is loaded when a user visits a website or opens an email.

If proper privacy protections are not employed, websites can use this data for user profiling. What is worse is that this data can also be sent to third parties like data brokers, affiliate networks, and advertising networks. This collection of browsing data across multiple websites is what is referred to as cross-site tracking.

How Apple's ad attribution aims to help

Apple's ad attribution model is built directly into the browser and runs on-device. This ensures that the browser vendor is not able to see what advertisements are being clicked or what purchases are being made. The Privacy Preserving Ad Click Attribution model works in three steps:

Storing ad clicks: According to Apple's proposal, the page hosting the ad is responsible for storing ad clicks. It does this via two optional attributes: adDestination, the domain the ad click is navigating the user to, and adCampaignID, the identifier of the ad campaign. Neither the browser vendor nor the website is allowed to read the stored ad click data or detect that it exists. The data is stored only for a limited time; in the case of WebKit, it is 7 days.

Matching conversions against stored ad clicks: The second step, matching conversions against stored ad clicks, allows advertisers to understand which of their ad campaigns are the most effective. A conversion is basically the user performing the desired action in response to your advertisement, for instance, a customer adding an item to the shopping cart or signing up for a new service. In this model, tracking pixels are used to determine which actions taken by the user benefit the business. Data like the location of the user, the time of day, and the value of the conversion are passed to the browser through different parameters. Apple ensures that no sensitive data like names or addresses is stored.

Sending out ad click attribution data: In the last step, the browser reports the existence of the conversion to the website or marketer. After the conversion is matched to an ad, the browser sets a random timer of between 24 and 48 hours, after which it sends a stateless POST request passing the ad campaign and other parameters to the advertiser. Apple is previewing this model in Safari Technology Preview 82+.
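To make the three steps easier to follow, here is a small Python model of the browser-side flow described above. It is purely illustrative: the class and field names are made up for this sketch, and the real logic lives inside WebKit, not in anything an advertiser or website would run.

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class StoredAdClick:
    ad_destination: str          # domain the ad click navigated the user to
    ad_campaign_id: str          # identifier of the ad campaign
    stored_at: float = field(default_factory=time.time)   # WebKit keeps this ~7 days

class AdClickAttribution:
    """Toy, on-device model of the three steps; not WebKit code."""

    def __init__(self):
        self._clicks = []        # readable by neither the website nor the browser vendor

    def store_ad_click(self, ad_destination, ad_campaign_id):
        # Step 1: the page hosting the ad stores the click.
        self._clicks.append(StoredAdClick(ad_destination, ad_campaign_id))

    def match_conversion(self, destination_domain):
        # Step 2: a conversion on the destination domain is matched to a stored click.
        for click in self._clicks:
            if click.ad_destination == destination_domain:
                return click
        return None

    def schedule_report(self, click):
        # Step 3: the report goes out after a random 24-48 hour delay,
        # as a stateless POST carrying only the campaign parameters.
        delay_seconds = random.uniform(24, 48) * 3600
        return time.time() + delay_seconds   # when the browser would send the POST
```

A conversion that never matches a stored click simply produces no report, which is the privacy point: the advertiser learns that some click on some campaign converted, but not who converted.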
Apple is also proposing this model as a standard through the W3C Web Platform Incubator Community Group (WICG). The model has received mixed reactions from users. Some think it can help reduce online tracking. A Reddit user supporting the initiative said, "Ad companies are not having trouble attributing campaigns. The problem is that small, uncoordinated "privacy" features cause Ad Tech companies to become far more aggressive in how they track users. It's not the companies that lose here, it's you. A standardized, privacy-centric method for companies to accomplish attribution will help end the arms race and move back to a more consumer-friendly model. Small edges are worth a fortune in Ads. This is like the war on drugs. Clamping down and assuming ad companies will walk away is way too optimistic. Instead, they will move deeper into the shadows at whatever the cost."

Others think that it is not a browser's responsibility to help online advertising, and that browsers should be on the users' side. "I certainly have never wanted my browser to report ad click attribution," another Redditor remarked.

Read the full announcement by Apple for more details.

Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case
Apple plans to make notarization a default requirement in all future macOS updates

KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!

Savia Lobo
22 May 2019
7 min read
KubeCon + CloudNativeCon 2019 is live (May 21 to May 23) at the Fira Gran Via exhibition center in Barcelona, Spain. The conference has brought a host of announcements on topics including Kubernetes, DevOps, and cloud-native applications, with plenty of news from Microsoft, Google, the Cloud Native Computing Foundation, and more. Let's have a brief overview of each of these announcements.

Microsoft Kubernetes announcements: Service Mesh Interface (SMI), Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and Helm 3 alpha

Service Mesh Interface (SMI)

Microsoft launched the Service Mesh Interface (SMI) specification, the company's new community project for collaboration around service mesh infrastructure. SMI defines a set of common, portable APIs that provide developers with interoperability across different service mesh technologies, including Istio, Linkerd, and Consul Connect. The Service Mesh Interface provides:

A standard interface for meshes on Kubernetes
A basic feature set for the most common mesh use cases
Flexibility to support new mesh capabilities over time
Space for the ecosystem to innovate with mesh technology

To know more about the Service Mesh Interface, head over to Microsoft's official blog.

Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and the first alpha of Helm 3

Microsoft released version 1.0 of its open source Kubernetes extension for Visual Studio Code. The extension brings native Kubernetes integration to Visual Studio Code and is fully supported for production management of Kubernetes clusters. Microsoft has also added an extensibility API that makes it possible for anyone to build their own integration experiences on top of Microsoft's baseline Kubernetes integration.

Microsoft also announced Virtual Kubelet 1.0. Brendan Burns, Kubernetes co-founder and Microsoft distinguished engineer, said, "The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. We developed it in the context of the Cloud Native Computing Foundation, where it's a sandbox project." He further added, "With 1.0, we're saying 'It's ready.' We think we've done all the work that we need in order for people to take production level dependencies on this project."

Microsoft also released the first alpha of Helm 3. Helm is the de facto standard for packaging and deploying Kubernetes applications. Helm 3 is simpler and supports all the modern security, identity, and authorization features of today's Kubernetes. It revisits and simplifies Helm's architecture, building on the growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and advanced features, such as custom resource definitions (CRDs). Know more about Helm 3 in detail in Microsoft's official blog post.

Google announces enhancements to Google Kubernetes Engine; Stackdriver Kubernetes Engine Monitoring generally available

On the first day of KubeCon + CloudNativeCon 2019, Google announced three release channels for Google Kubernetes Engine (GKE): Rapid, Regular, and Stable. Google, in its official blog post, states, "Each channel offers different version maturity and freshness, allowing developers to subscribe their cluster to a stream of updates that match risk tolerance and business requirements." The feature will be launched into alpha with the first release in the Rapid channel, which will give developers early access to the latest versions of Kubernetes.

Google also announced the general availability of Stackdriver Kubernetes Engine Monitoring, a tool that gives users GKE observability (metrics, logs, events, and metadata) all in one place, to help provide faster time-to-resolution for issues, no matter the scale. To know more about the three release channels and Stackdriver Kubernetes Engine Monitoring in detail, head over to Google's official blog post.

Cloud Native Computing Foundation announcements: Harbor 1.8, a new online course 'Cloud Native Logging with Fluentd', Intuit Inc. wins the CNCF End User Award, and Kong Inc. is now a Gold Member

Harbor 1.8

The VMware team released Harbor 1.8 yesterday, with new features and improvements including enhanced automation integration, security, monitoring, and cross-registry replication support. Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor 1.8 also brings various other capabilities for both administrators and end users:

A health check API, which shows the detailed status and health of all Harbor components.
An upgraded registry core. Harbor extends and builds on top of the open source Docker Registry to facilitate registry operations like the pushing and pulling of images; in this release, the bundled Docker Registry has been upgraded to version 2.7.1.
Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job; scan, garbage collection, and replication jobs are all supported.
API explorer integration. End users can now explore and trigger Harbor's API via the Swagger UI nested inside Harbor's UI.
Enhancement of the Job Service engine to include internal webhook events and additional APIs for job management, plus numerous bug fixes to improve the stability of the service.

To know more about this release, read the official Harbor 1.8 blog post.

A new online course on 'Cloud Native Logging with Fluentd'

The Cloud Native Computing Foundation and The Linux Foundation have together designed a new, self-paced, hands-on course, Cloud Native Logging with Fluentd. The course will provide users with the necessary skills to deploy Fluentd in a wide range of production settings.

Eduardo Silva, Principal Engineer at Arm Treasure Data, said, "This course will explore the full range of Fluentd features, from installing Fluentd and running it in a container, to using it as a simple log forwarder or a sophisticated log aggregator and processor." He further added, "As we see the Fluentd project growing into a full ecosystem of third party integrations and components, we are thrilled that this course will be offered so more people can realize the benefits it provides." To know more about the course and its benefits in detail, visit the official blog post.

Intuit Inc. wins the CNCF End User Award

At the conference yesterday, the CNCF announced that Intuit Inc. has won the CNCF End User Award in recognition of its contributions to the cloud native ecosystem. Intuit is an active user, contributor, and developer of open source technologies. As part of its journey to the public cloud, Intuit has advanced the way it leverages cloud native technologies in production, including CNCF projects like Kubernetes and OPA. To know more about this achievement in detail, read the official blog post.

Kong Inc. is now a Gold Member of the CNCF

The CNCF announced that Kong Inc., which provides an open source API and service lifecycle management tool, has upgraded its membership to Gold. The company backs the Kong project, a cloud native, fast, scalable, and distributed microservice abstraction layer. Kong is focused on building a service control platform that acts as the nervous system for an organization's modern software architectures by intelligently brokering information across all services.

Dan Kohn, Executive Director of the Cloud Native Computing Foundation, said, "With their focus on open source and cloud native, Kong is a strong member of the open source community and their membership provides resources for activities like bug bounties and security audits that help our community continue to thrive." Head over to CNCF's official announcement post for more.

More announcements are expected from the conference; to stay updated, visit the official KubeCon + CloudNativeCon 2019 website.

F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference

5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]

Richard Gall
22 May 2019
7 min read
Visual Studio Code has quickly become one of the most popular text editors on the planet. While debate will continue to rage about the relative merits of every text editor, it's nevertheless true that Visual Studio Code is unique in that it is incredibly customizable: it can be as lightweight as a text editor or as feature-rich as an IDE.

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free from Microsoft here. Try Visual Studio Code yourself. Learn more here.

This means the range of developers using Visual Studio Code is incredibly diverse, and each of them faces a unique set of challenges alongside their personal preferences. I spoke to a few of them about why they use Visual Studio Code and how they make it work for them.

"Visual Studio Code is streamlined and flexible"

Ben Sibley is the Founder of Complete Themes. He likes Visual Studio Code because it is relatively lightweight while also offering considerable flexibility.

"I love how streamlined and flexible Visual Studio Code is. Personally, I don't need a ton of functionality from my IDE, so I appreciate how simple the default configuration is. There's a very concise set of features built in, like the Git integration.

"I was using PHPStorm previously and while it was really feature-rich, it was also overwhelming at times. VSC is faster, lighter, and with the extension market you can pick and choose which additional tools you need. And it's a popular enough editor that you can usually find a reliable and well-reviewed extension."

Read next: How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

"Visual Studio Code is the best in terms of extension ecosystem, language support and configuration"

Libby Horacek is a developer at Position Development. She has worked with several different code editors but struggled to find one that allowed her to effectively move between languages. For Libby, Visual Studio Code offered the right level of flexibility. She also explained how the team at Position Development has used VSC's Live Share feature, which allows developers to directly share and collaborate on code inside their editor.

"I currently use Visual Studio Code. I've tried a LOT of different editors. I'm a polyglot developer, so I need an editor that isn't just for one language. RubyMine is great for Ruby, and PyCharm is good for Python, but I don't want to switch editors every time I switch languages (sometimes multiple times a day). My main constraint is Haskell language support — there are plugins for most IDEs now, but some are better than others.

"For a long time I used Emacs just because I was able to steal a great configuration setup for it from a coworker, but a few months back it stopped working due to updates and I didn't want to acquire the Emacs expertise to fix it. So I tried IntelliJ, Visual Studio, Atom, Sublime Text, even Vim… but in the end I liked Visual Studio the best in terms of extension ecosystem, language support, and ease of use and configuration.

"My team also uses Visual Studio's Live Share for pairing. I haven't tried it personally but it looks like a great option for remote pairing. The only thing my coworkers have cautioned is that they encountered a bug with the "undo" functionality that wiped out most of a file they were working on. Maybe that bug has been fixed by now, but as always, commit early and commit often!"

"As a JavaScript dev shop, we love that VSC is written in JavaScript"

Cody Swann is the CEO of Gunner Technology, a software development company that builds using JavaScript on AWS for both the public and private sector.

"All our developers here [at Gunner Technology] use VSC.

"We switched from Sublime about two years ago because Sublime started to feel slow and neglected. Before that, we used TextMate and abandoned that for the same reasons.

"As a JavaScript dev shop, we love that VSC is written in JavaScript. It makes it easier for us to write in-house extensions and such.

"Additionally, we love that Microsoft releases monthly updates and keeps improving performance."

Read next: Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

"The Visual Studio Code team pay close attention to the problems developers face"

Ajeet Dhaliwal is a software developer at Tesults. He explains that he has used several different IDEs and editors but came to Visual Studio Code after spending some time using Node.js and React on Brackets.

"I have used Visual Studio Code almost exclusively for the last couple of years.

"In years prior to making this switch, the nature of the development work that I did meant that I was broadly limited to using specific IDEs such as Visual Studio and Xcode. Then in 2014 I started to get into Node.js and was looking for a code editor that would be more suitable. I tried out a few and ultimately settled on Brackets.

"I used Brackets for a while but wasn't always happy with it. The most annoying issue was the way text was rendered on my Mac.

"Over time I started doing React work too, and every time I revisited VSC the improvements were impressive. It seemed to me that the developers were paying close attention to the problems developers face; they were creating features I had never even thought I would need, and the extensions added highly useful features for Node.js and React dev work. The font rendering was not an issue either, so it became an inevitable switch."

"I have to context switch regularly - I expect my brain to be the slowest element, not the IDE"

Kyle Balnave is Senior Developer and Squad Manager at High Speed Training. Despite working with numerous editors and IDEs, he likes Visual Studio Code because it allows him to move between different contexts incredibly quickly. Put simply, it allows him to work faster than other IDEs do.

"I've used several different editors over the years. They generally fall under two categories: monolithic (I can do anything you'll ever want to do out of the box) and modular (I do the basics but allow extensions to be added to do most of the rest).

"The former are IDEs like NetBeans, IntelliJ and Visual Studio. In my experience they are slow to load and need a more powerful development machine to keep responsive. They have a huge range of functionality, but in everyday development I just need an intelligent code editor.

"The latter are IDEs like Eclipse, Visual Studio Code and Atom. They load quickly, respond fast and have a wide range of extensions that allow me to develop what I need. They sometimes fall short in their functionality, but I generally find this to be infrequent.

"Why do I use VSCode? Because it doesn't slow me down when I code. I have to context switch regularly, so I expect my own brain to be the slowest element, not the IDE."
Learn how to develop with Node.js on Azure by downloading Learning Node.js with Azure for free, courtesy of Microsoft.

How Change.org uses Flow, Elixir’s library to build concurrent data pipelines that can handle a trillion messages

Sugandha Lahoti
22 May 2019
7 min read
Last month, at ElixirConf EU 2019, John Mertens, Principal Engineer at Change.org, conducted a session, Lessons From Our First Trillion Messages with Flow, for developers interested in using Elixir to build data pipelines for real-world systems.

For many Elixir converts, the attraction of Elixir is rooted in the promise of the BEAM concurrency model (BEAM was originally short for Bogdan's Erlang Abstract Machine, named after Bogumil "Bogdan" Hausman, who wrote the original version; the name may also be read as Björn's Erlang Abstract Machine, after Björn Gustavsson, who wrote and maintains the current version). The Flow library has made it easy to build concurrent data pipelines utilizing the BEAM. The problem is that while the docs are great, there are not many resources on running Flow-based systems in production. In his talk, John shares some lessons his team learned from processing their first trillion messages through Flow.

Using Flow at Change.org

Change.org is a platform for social change where people from all over the world come to start movements on all topics and of all sizes. Technologically, Change.org is primarily built in Ruby and JavaScript, but the team started using Elixir in early 2018 to build a high-volume, mission-critical data processing pipeline. They chose Elixir for this new system because of its Flow library.

Flow is a library for computational parallel flows in Elixir, built on top of GenStage. GenStage is a "specification and computational flow for Elixir", meaning it provides a way for developers to define a pipeline of work to be carried out by independent steps (or stages) in separate processes. Flow allows developers to express computations on collections, similar to the Enum and Stream modules, although computations will be executed in parallel using multiple GenStages.

At Change.org, the developers built proofs of concept in a few different languages and put them up against each other, with the two main criteria being performance and developer happiness. Elixir came out as the clear winner. Whenever an event gets added to a queue on Change.org, their Elixir system pulls the message off the queue, preps and transforms it through some business logic, and generates some side effects. Next, depending on a few parameters, the messages are either passed on to another system, discarded, or retried. So far things have gone smoothly, which brought John to discuss lessons from processing their first trillion messages with Flow.

Lesson 1: Let Flow do the work

Flow and GenStage are both great libraries which provide a few game-changing features by default. The first is parallelism. Parallelism is beneficial for large-scale data processing pipelines, and Flow's abstractions make utilizing it easy: you write code that looks essentially like a standard Elixir pipeline, but it utilizes all of your CPU cores.

The second feature is backpressure. GenStage specifies how Elixir processes should communicate with back-pressure. Simply put, backpressure is when your system asks for more data to process instead of having data pushed onto it. With Flow, your data processing is in charge of requesting more events. This means that if your process depends on some other service and that service becomes slow, your whole flow slows down accordingly and no service gets overloaded with requests, keeping the whole system up.
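Flow's demand-driven model is specific to Elixir and the BEAM, but the underlying idea, consumers pulling work instead of having it pushed at them, can be sketched in any language. The snippet below is a rough Python analogue (not Change.org's code): a bounded queue makes the producer block whenever the consumer falls behind, which is backpressure in its crudest form.

```python
import queue
import threading
import time

# A bounded queue: once 100 items are waiting, put() blocks until the
# consumer catches up. The producer's pace is set by consumer demand.
work = queue.Queue(maxsize=100)

def producer():
    for i in range(1_000):
        work.put(i)          # blocks here when the consumer is slow
    work.put(None)           # sentinel telling the consumer to stop

def consumer():
    while True:
        item = work.get()
        if item is None:
            break
        time.sleep(0.01)     # simulate a slow downstream service
        work.task_done()

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Flow and GenStage go further, since demand is negotiated between stages rather than implied by a buffer size, but the effect John describes is the same: a slow dependency slows the whole pipeline down instead of being overwhelmed by it.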
Lesson 2: Organize your Flow

The next lesson is about how to set up your code to take advantage of Flow. The following organizational tactics help Change.org keep their Flow system manageable in practice.

Keep the Flow simple

The golden rule, according to John, is to keep your flow simple: start simple and then increase the complexity depending on the constraints of your system. He cites a line from the Flow docs:

"If you can solve a problem without using partition at all, that is preferred. Those are typically called embarrassingly parallel problems."

If you can shape your problem into an embarrassingly parallel problem, he says, Flow can really shine.

Know your code and your system

He also advises that developers should know their code and understand their systems, and gives an example of how SQS is used with Flow. Amazon SQS (Simple Queue Service) is a message-queuing system (also used at Change.org) that allows you to write distributed applications by exposing a message pipeline that can be processed in the background by workers. Its two main features are the visibility window and acknowledgments. When you pull a message off a queue, you have a set amount of time to acknowledge that you've received and processed that message; that amount of time is called the visibility window, and it's configurable. If you don't acknowledge the message within the visibility window, it goes back into the queue. If a message is pulled and not acknowledged a configured number of times, it is either discarded or sent to a dead letter queue. John then walks through an example of a Flow they use in production.

Maintain a consistent data structure

You should also use a consistent data structure, or token, throughout the data pipeline. The data structure most essential to the flow at Change.org is the message struct, %Message{}. When a message comes in from SQS, they create a message struct based on it. The consistency of having the same data structure at every step of the flow is how they keep their system simple. John then shows example code for handling different types of data while keeping the flow simple.

Isolate the side effects

The next organizational tactic Change.org employs to keep their Flow system manageable is to isolate the side effects. Side effects are mutations; if something goes wrong in a mutation, you need to be able to roll it back. In the spirit of keeping the flow simple, Change.org batches all the side effects together and puts them at the end, so that nothing gets lost if they need to roll them back. However, there are certain cases where you can't put all the side effects together and need a different strategy. These cases can be handled using sagas. The saga pattern is a way to handle long-lived transactions by providing rollback instructions for each step along the way, so that if a step goes bad, its compensating function can simply be run. There is also an Elixir implementation of sagas called Sage.

Lesson 3: Tune the Flow

How you optimize your Flow depends on the shape of your problem. This means tailoring the Flow to your own use case to squeeze out all the throughput. However, there are three things you can do to shape your Flow: measure Flow performance, work out what can actually be tuned, and look at what can help outside of the Flow itself.
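Before moving on, here is a minimal sketch of the saga pattern mentioned under "Isolate the side effects" above. It is not Change.org's code and does not use Sage's API; it only shows the core mechanic of pairing each step with a compensating action and running the compensations in reverse order when a later step fails.

```python
from typing import Callable, List, Tuple

# Each step is (action, compensation); the compensation undoes the action.
Step = Tuple[Callable[[], None], Callable[[], None]]

def run_saga(steps: List[Step]) -> bool:
    completed: List[Callable[[], None]] = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # Roll back every completed step, most recent first.
            for undo in reversed(completed):
                undo()
            return False
    return True

if __name__ == "__main__":
    ok = run_saga([
        (lambda: print("reserve inventory"), lambda: print("release inventory")),
        (lambda: print("charge card"),       lambda: print("refund card")),
    ])
    print("saga committed" if ok else "saga rolled back")
```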
Apart from the three main lessons on data processing with Flow, John also mentions a few others, namely:

Graceful producer shutdowns
Flow-level integration tests
Complex batching

Finally, John gave a glimpse of Broadway from Change.org's codebase. Broadway allows developers to build concurrent, multi-stage data ingestion and data processing pipelines with Elixir. It takes on the burden of defining concurrent GenStage topologies and provides a simple configuration API that automatically defines concurrent producers, concurrent processing, batch handling, and more, leading to both time- and cost-efficient ingestion and processing of data. Some of its features include back-pressure, automatic acknowledgments at the end of the pipeline, batching, automatic restarts in case of failures, graceful shutdown, built-in testing, and partitioning. José Valim's keynote at ElixirConf EU 2019 also talked about streamlining data processing pipelines using Broadway.

You can watch the full video of John Mertens' talk here. John is a principal engineer at Change.org, using Elixir to empower social action in his organization.

Why Ruby developers like Elixir
Introducing Mint, a new HTTP client for Elixir
Developer community mourns the loss of Joe Armstrong, co-creator of Erlang

Getting started with designing RESTful APIs

Vincy Davis
21 May 2019
9 min read
The application programming interface (API) is one of the most promising software paradigms for addressing anything, anytime, anywhere, and any device, which is the one substantial need of the digital world at the moment. This article discusses how APIs and API designs help to address those challenges and bridge the gaps. It covers a few essential API design guidelines, such as consistency, standardization, re-usability, and accessibility through REST interfaces, which could equip API designers with better thought processes for their API modeling.

This article is an excerpt taken from the book 'Hands-On RESTful API Design Patterns and Best Practices', written by Harihara Subramanian and Pethura Raj. In this article, you will understand the various design rules of RESTful APIs, including the use of Uniform Resource Identifiers, URI authority, resource modelling, and more.

Goals of RESTful API design

APIs should be straightforward, unambiguous, easy to consume, well-structured, and, most importantly, accessible with well-known and standardized HTTP methods. They are one of the best possible solutions for resolving many digitization challenges out of the box. The basic API design goals are:

Affordance
Loosely coupled
Leverage existing web architecture

RESTful API design rules

Best practices and design principles are guidelines that API designers try to incorporate in their API design. To make an API design RESTful, certain rules are followed, such as the following:

Use of Uniform Resource Identifiers
URI authority
Resource modelling
Resource archetypes
URI path
URI query
Metadata design rules (HTTP headers and returned error codes) and representations

It will be easier to design and deliver the finest RESTful APIs if we understand these design rules.

Uniform Resource Identifiers

REST APIs should use Uniform Resource Identifiers (URIs) to represent their resources. URIs should be clear and straightforward, so that they communicate the API's resources crisply and clearly:

A simple-to-understand URI is https://xx.yy.zz/sevenwonders/tajmahal/india/agra; the path segments clearly indicate the intention or representation.
A harder-to-understand URI is https://xx.yy.zz/books/36048/9780385490627; here, the text after books is very hard for anyone to interpret.

So having a simple, understandable representation in the URI is critical in RESTful API design.

URI formats

The syntax of the generic URI is scheme "://" authority "/" path [ "?" query ] [ "#" fragment ], and the following are the formatting rules for API designs:

Use the forward slash (/) separator
Don't use a trailing forward slash
Use hyphens (-)
Avoid underscores (_)
Prefer all lowercase letters in a URI path
Do not include file extensions

REST API URI authority

Having seen the rules for URIs in general, let's look at the authority portion (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) of the REST API URI:

Use consistent sub-domain names for an API: as you can see in http://api.baseball.restfulapi.org, the API domain should have api as part of its sub-domain, while the top-level domain and the first sub-domain name indicate the service owner, for example baseball.restfulapi.org.
Use consistent sub-domain names for a developer portal: as we saw in the API playgrounds section, API providers should expose sites where app developers can test their APIs, called developer portals.
So, by convention, the developer portal's sub-domain should have developer in it; an example of such a sub-domain would be http://developer.baseball.restfulapi.org.

Resource modelling

Resource modelling is one of the primary activities for API designers, as it helps to establish the API's fundamental concepts. In general, the URI path always conveys REST resources, and each part of the URI is separated by a forward slash (/) to indicate a unique resource within its model's hierarchy. Each segment separated by a forward slash indicates an addressable resource, as follows:

https://api-test.lufthansa.com/v1/profiles/customers
https://api-test.lufthansa.com/v1/profiles
https://api-test.lufthansa.com

Customers, profiles, and the API root are all unique resources in the preceding URI models. So, resource modelling is a crucial design step before designing URI paths.

Resource archetypes

Each service provided by the API is an archetype, and archetypes indicate the structures and behaviors of REST API designs. Resource modelling should start with a few fundamental resource archetypes; usually, a REST API is composed of four unique archetypes:

Document
Collection
Stores
Controller

URI path

This section discusses the rules for designing meaningful URI paths (the path portion of scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]). The rules are:

Use singular nouns for document names, for example, https://api-test.lufthansa.com/v1/profiles/customers/memberstatus.
Use plural nouns for collections and stores, for example, https://api-test.lufthansa.com/v1/profiles/customers (a collection) and https://api-test.lufthansa.com/v1/profiles/customers/memberstatus/preferences (a store).
As controller names represent an action, use a verb or verb phrase for controller resources, for example, https://api-test.lufthansa.com/v1/profiles/customers/memberstatus/reset.
Do not use CRUD function names in URIs: DELETE /users/1234 is correct, while DELETE /user-delete/1234 and POST /users/1234/delete are not.

URI query

These are the rules for designing the query portion (scheme "://" authority "/" path [ "?" query ] [ "#" fragment ]) of REST API URIs. The query component of the URI also takes part in the unique identification of the resource:

Use the query to filter collections or stores, for example with a limit: https://api.lufthansa.com/v1/operations/flightstatus/arrivals/ZRH/2018-05-21T06:30?limit=40
Use the query to paginate collection or store results, for example with an offset: https://api.lufthansa.com/v1/operations/flightstatus/arrivals/ZRH/2018-05-21T06:30?limit=40&offset=10

HTTP interactions

A REST API doesn't demand any special transport layer mechanism; all it needs is the basic Hypertext Transfer Protocol and its methods to represent its resources over the web. The following sections touch upon how REST should utilize those basic HTTP methods.

Request methods

The client specifies the intended interaction with well-defined semantic HTTP methods, such as GET, POST, PUT, DELETE, PATCH, HEAD, and OPTIONS.
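To make the path, query, and method rules concrete, here is a small, hedged sketch of a resource-oriented endpoint. The book itself doesn't prescribe a framework; Flask is used here purely for illustration, and the customer data store is an in-memory stand-in.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
CUSTOMERS = {"1234": {"id": "1234", "name": "Ada"}}   # stand-in data store

# Collection resource: plural noun, GET to read, POST to create.
@app.route("/v1/profiles/customers", methods=["GET", "POST"])
def customers_collection():
    if request.method == "POST":
        new = request.get_json()
        CUSTOMERS[new["id"]] = new
        return jsonify(new), 201                      # 201 Created
    limit = int(request.args.get("limit", 40))        # query parameters handle
    offset = int(request.args.get("offset", 0))       # filtering and pagination
    items = list(CUSTOMERS.values())[offset:offset + limit]
    return jsonify(items), 200

# Document resource: singular item addressed by the path, removed with DELETE
# (never with a CRUD verb such as /delete in the URI).
@app.route("/v1/profiles/customers/<customer_id>", methods=["GET", "DELETE"])
def customer_document(customer_id):
    if customer_id not in CUSTOMERS:
        return jsonify({"error": "not found"}), 404   # 404 Not Found
    if request.method == "DELETE":
        del CUSTOMERS[customer_id]
        return "", 204                                # 204 No Content
    return jsonify(CUSTOMERS[customer_id]), 200
```

The route names echo the Lufthansa-style examples above; in a real service the collection would live behind a database rather than a module-level dictionary.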
The following are the rules that an API designer should take into account when planning their design:

Don't tunnel other requests through the GET and POST methods
Use the GET method to retrieve a representation of a resource
Use the HEAD method to retrieve response headers
Use the PUT method to update and insert a stored resource
Use the PUT method to update mutable resources
Use the POST method to create a new resource in a collection
Use the POST method for a controller's execution
Use the DELETE method to remove a resource from its parent
Use the OPTIONS method to retrieve metadata

Response status codes

The HTTP specification defines standard status codes, and a REST API can use those same status codes to deliver the results of a client request. The status code categories, and a few associated REST API rules, are as follows:

1xx (Informational): provides protocol-level information
2xx (Success): the client request was accepted successfully, for example 200 OK, 201 Created, 202 Accepted, and 204 No Content
3xx (Redirection): the client request is redirected by the server to a different endpoint to be fulfilled, for example 301 Moved Permanently, 302 Found, 303 See Other, 304 Not Modified, and 307 Temporary Redirect
4xx (Client error): errors on the client side, for example 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 405 Method Not Allowed, 406 Not Acceptable, 409 Conflict, 412 Precondition Failed, and 415 Unsupported Media Type
5xx (Server error): errors on the server side, for example 500 Internal Server Error

Metadata design

This section looks at the rules for metadata design, including HTTP headers and media types.

HTTP headers

The HTTP specification defines a set of standard headers, through which a client can get information about a requested resource and which carry messages that indicate its representation and may serve as directives to control intermediary caches. The following points suggest a few rules conforming to the HTTP standard headers:

Should use Content-Type
Should use Content-Length
Should use Last-Modified in responses
Should use ETag in responses
Stores must support conditional PUT requests
Should use Location to specify the URI of a newly created resource (through PUT)
Should leverage HTTP cache headers
Should use expiration headers with 200 ("OK") responses
May use expiration caching headers with 3xx and 4xx responses
Mustn't use custom HTTP headers

Media types and media type design rules

Media types help to identify the form of the data in a request or response message body, and the Content-Type header value represents a media type, also known as the Multipurpose Internet Mail Extensions (MIME) type. Media type design influences many aspects of a REST API design, including hypermedia, opaque URIs, and distinct, descriptive media types, so that app developers or clients can rely on the self-descriptive features of the REST API.
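Pulling a few of the header and status-code rules above together, the sketch below shows how they might look in a handler. As before, Flask is just an illustrative choice, and the ETag store and customer payloads are hypothetical.

```python
from flask import Flask, jsonify, make_response, request

app = Flask(__name__)
ETAGS = {}   # hypothetical per-resource entity tags

@app.route("/v1/profiles/customers", methods=["POST"])
def create_customer():
    new = request.get_json()
    etag = str(hash(frozenset(new.items())))
    ETAGS[new["id"]] = etag
    resp = make_response(jsonify(new), 201)           # 201 Created
    # Location points at the URI of the newly created resource;
    # ETag lets clients make conditional requests later.
    resp.headers["Location"] = f"/v1/profiles/customers/{new['id']}"
    resp.headers["ETag"] = etag
    return resp

@app.route("/v1/profiles/customers/<customer_id>", methods=["PUT"])
def update_customer(customer_id):
    # A store supports conditional PUT: the If-Match header must carry
    # the current ETag, otherwise the update is refused.
    if request.headers.get("If-Match") != ETAGS.get(customer_id):
        return jsonify({"error": "precondition failed"}), 412
    updated = request.get_json()
    ETAGS[customer_id] = str(hash(frozenset(updated.items())))
    return jsonify(updated), 200
```

Note that jsonify already sets Content-Type to application/json, which satisfies the content-type rule without extra work.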
The following are the two rules of media type design:

Use application-specific media types
Support media type negotiation in cases of multiple representations

In addition, REST APIs should support media type selection using a query parameter: to support clients with simple links and easy debugging, media type selection should be possible through a query parameter named accept, with a value format that mirrors that of the Accept HTTP request header. This is a more precise and generic approach than alternatives such as the format parameter seen in GET https://swapi.co/api/planets/1/?format=json.

Summary

We have briefly discussed the goals of RESTful API design and how API designers need to follow design principles and rules so that they can create better RESTful APIs. To learn about the rules for the most common resource formats, such as JSON and hypermedia, as well as error types and client concerns, head over to the book 'Hands-On RESTful API Design Patterns and Best Practices'.

Svelte 3 releases with reactivity through language instead of an API
Get to know ASP.NET Core Web API [Tutorial]
Implement an API Design-first approach for building APIs [Tutorial]

Approx. 250 public network users affected during Stack Overflow's security attack

Vincy Davis
20 May 2019
4 min read
In a security update released on May 16, Stack Overflow confirmed that "some level of their production access was gained on May 11". In a more recent "Update to Security Incident" post, Stack Overflow provides further details of the security attack, including the actual date and duration of the attack, how the attack took place, and the company's response to the incident.

According to the update, the first intrusion happened on May 5, when a build deployed to the development tier for stackoverflow.com contained a bug. This allowed the attacker to log in to the development tier and to escalate their access to the production version of stackoverflow.com. From May 5 onwards, the intruder took time to explore the website, until May 11, when they made changes to the Stack Overflow system to obtain privileged access on production. This change was identified by the Stack Overflow team and led to immediately revoking the attacker's network-wide access and initiating an investigation into the intrusion.

As part of their security procedures to protect sensitive customer data, Stack Overflow maintains separate infrastructure and networks for clients of their Teams, Business, and Enterprise products, and they have not found any evidence of these systems or customer data being accessed. The Advertising and Talent businesses of Stack Overflow were also not impacted. However, the team has identified some privileged web requests that the attacker made which might have returned the IP addresses, names, or emails of approximately 250 public network users of Stack Exchange. The affected users will be notified by Stack Overflow.

Steps taken by Stack Overflow in response to the attack:

Terminated the unauthorized access to the system.
Conducted an extensive and detailed audit of all logs and databases that they maintain, which allowed them to trace the steps and actions that were taken.
Remediated the original issues that allowed the unauthorized access and escalation.
Issued a public statement proactively.
Engaged a third-party forensics and incident response firm to assist with both remediation and learnings.
Taken precautionary measures such as cycling secrets, resetting company passwords, and evaluating systems and security levels.

Stack Overflow has again promised to provide more public information after its investigation concludes. Many developers are appreciating the quick confirmation, the ongoing updates, and the response taken by Stack Overflow during this security incident.

https://twitter.com/PeterZaitsev/status/1129542169696657408

A user on Hacker News comments, "I think this is one of the best sets of responses to a security incident I've seen:
Disclose the incident ASAP, even before all facts are known. The disclosure doesn't need to have any action items, and in this case, didn't.
Add more details as the investigation proceeds, even before it fully finishes, to help clarify scope.
The proactive communication and transparency could have downsides (causing undue panic), but I think these posts have presented a sense that they have it mostly under control. Of course, this is only possible because they, unlike some other companies, probably do have a good security team who caught this early. I expect the next (or perhaps the 4th) post will be a fuller post-mortem from after the incident. This series of disclosures has given me more confidence in Stack Overflow than I had before!"

Another user on Hacker News added, "Stack Overflow seems to be following a very responsible incident response procedure, perhaps instituted by their new VP of Engineering (the author of the OP). It is nice to see."

Read More

2019 Stack Overflow survey: A quick overview
Bryan Cantrill on the changing ethical dilemmas in Software Engineering
Listen to Uber engineer Yuri Shkuro discuss distributed tracing and observability [Podcast]