
Tech News - Uncategorized


OpenBSD 6.4 released

Savia Lobo
19 Oct 2018
3 min read
Yesterday, Theo de Raadt, the founder of OpenBSD, announced the release of a new version of the free and open-source, security-focused OS: OpenBSD 6.4. The most interesting feature in OpenBSD 6.4 is the unveil() system call, which allows applications to sandbox themselves by blocking their own access to the file system. This is especially useful for programs which operate on unknown data that may try to exploit or crash the application. OpenBSD 6.4 also includes many driver improvements, lets OpenSSH's configuration files use service names instead of port numbers, and the Clang compiler will now replace some risky ROP instructions with safe alternatives.

Other features and improvements in OpenBSD 6.4

Improved hardware support
The new version includes ACPI support on OpenBSD/arm64 platforms.
New acpipci(4/arm64) driver providing support for PCI host bridges based on information provided by ACPI.
Added a sensor for port replicator status to acpithinkpad(4).
Support for Allwinner H3 and A64 SoCs in sxitemp(4).
New bnxt(4) driver for Broadcom NetXtreme-C/E PCI Express Ethernet adapters based on the Broadcom BCM573xx and BCM574xx chipsets, enabled on amd64 and arm64 platforms.
The radeondrm(4) driver was updated to code based on Linux 4.4.155.

IEEE 802.11 wireless stack improvements
OpenBSD 6.4 has a new 'join' feature (managed with ifconfig(8)) with which the kernel manages automatic switching between different WiFi networks. ifconfig(8) scan performance has also been improved for many devices.

Generic network stack improvements
A new eoip(4) interface has been added for the MikroTik Ethernet over IP (EoIP) encapsulation protocol. New global IPsec counters are available via netstat(1). trunk(4) now has LACP administrative knobs for mode, timeout, system priority, port priority, and ifq priority.

Security improvements
OpenBSD 6.4 introduces a new RETGUARD security mechanism on amd64 and arm64, which uses per-function random cookies to protect access to function return instructions, making them harder to use in ROP gadgets. It also adds a SpectreRSB mitigation on amd64 and an Intel L1 Terminal Fault mitigation on amd64. clang(1) includes a pass that identifies common instructions which may be useful in ROP gadgets and replaces them with safe alternatives on amd64 and i386. The Retpoline mitigation against Spectre Variant 2 has been enabled in clang(1) and in assembly files on amd64 and i386. amd64 now uses eager-FPU switching to prevent FPU state information speculatively leaking across protection boundaries. Because Simultaneous Multi-Threading (SMT) uses core resources in a shared and potentially unsafe manner, it is now disabled by default; it can be enabled with the new hw.smt sysctl(2) variable. The audio recording feature is now disabled by default and can be enabled with the new kern.audio.record sysctl(2) variable. getpwnam(3) and getpwuid(3) no longer return a pointer to static storage but a managed allocation which gets unmapped, allowing detection of access to stale entries. sshd(8) includes improved defence against user enumeration attacks.

To know more about the other features in detail, head over to the OpenBSD 6.4 release log.

KUnit: A new unit testing framework for Linux Kernel
The kernel community attempting to make Linux more secure
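As a practical footnote to the piece above, the two new sysctl(2) knobs it mentions can be flipped from a root shell. A minimal sketch, assuming an OpenBSD 6.4 system where both default to off:

# Re-enable Simultaneous Multi-Threading (disabled by default in 6.4)
sysctl hw.smt=1
# Re-enable audio recording (also disabled by default)
sysctl kern.audio.record=1
# Persist both settings across reboots
echo "hw.smt=1" >> /etc/sysctl.conf
echo "kern.audio.record=1" >> /etc/sysctl.conf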


Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons

Bhagyashree R
06 May 2019
3 min read
Last week on Friday, Firefox users were left infuriated when all their extensions were abruptly disabled. Fortunately, Mozilla has fixed this issue in their yesterday’s releases, Firefox 66.0.4 and Firefox 60.6.2. https://twitter.com/mozamo/status/1124484255159971840 This is not the first time when Firefox users have encountered such type of problems. A similar issue was reported back in 2016 and it seems that they did not take proper steps to prevent the issue from recurring. https://twitter.com/Theobromia/status/1124791924626313216 Multiple users were reporting that all add-ons were disabled on Firefox because of failed verification. Users were also unable to download any new add-ons and were shown  "Download failed. Please check your connection" error despite having a working connection. This happened because the certificate with which the add-ons were signed expired. The timestamp mentioned in the certificates were: Not Before: May 4 00:09:46 2017 GMT Not After : May 4 00:09:46 2019 GMT Mozilla did share a temporary hotfix (“hotfix-update-xpi-signing-intermediate-bug-1548973”) before releasing a product with the issue permanently fixed. https://twitter.com/mozamo/status/1124627930301255680 To apply this hotfix automatically, users need to enable Studies, a feature through which Mozilla tries out new features before they release to the general users. The Studies feature is enabled by default, but if you have previously opted out of it, you can enable it by navigating to Options | Privacy & Security | Allow Firefox to install and run studies. https://twitter.com/mozamo/status/1124731439809830912 Mozilla released Firefox 66.0.4 for desktop and Android users and Firefox 60.6.2 for ESR (Extended Support Release) users yesterday with a permanent fix to this issue. These releases repair the certificate to re-enable web extensions that were disabled because of the issue. There are still some issues that need to be resolved, which Mozilla is currently working on: A few add-ons may appear unsupported or not appear in 'about:addons'. Mozilla assures that the add-ons data will not be lost as it is stored locally and can be recovered by re-installing the add-ons. Themes will not be re-enabled and will switch back to default. If a user’s home page or search settings are customized by an add-on it will be reset to default. Users might see that Multi-Account Containers and Facebook Container are reset to their default state. Containers is a functionality that allows you to segregate your browsing activities within different profiles. As an aftereffect of this certificate issue, data that might be lost include the configuration data regarding which containers to enable or disable, container names, and icons. Many users depend on Firefox’s extensibility property to get their work done and it is obvious that this issue has left many users sour. “This is pretty bad for Firefox. I wonder how much people straight up & left for Chrome as a result of it,” a user commented on Hacker News. Read the Mozilla Add-ons Blog for more details. Mozilla’s updated policies will ban extensions with obfuscated code Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
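As an aside to the story above, the expiry window quoted from the signing certificate is the kind of detail you can read straight off any certificate with openssl. A minimal sketch, where cert.pem is a placeholder for an exported certificate file:

# Print the notBefore/notAfter validity dates of a certificate
openssl x509 -in cert.pem -noout -dates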


Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments

Kunal Chaudhari
23 Jul 2018
3 min read
Last week Intel acquired eASIC, a fabless semiconductor company that makes customizable eASIC chips for use in wireless and cloud environments. The actual transaction amount for this merger was not disclosed by Intel. Intel believes that this acquisition is more "strategic" than just pure business, as competition around FPGAs is booming due to increasing demand for data and cloud services.

The rise of FPGAs and Intel's strategy to diversify beyond CPUs
FPGAs were first introduced back in the 80s and were considered an evolution in the path of fabless semiconductors. With each passing year, researchers have been trying to find innovative solutions to improve system performance to meet the needs of big data, cloud computing, mobile, networking, and other domains. The FPGA is at the heart of this quest to develop high-performing systems and is being paired with CPUs to facilitate compute-intensive operations.

Intel has a Programmable Solutions Group (PSG), which it created after acquiring Altera in 2015 for $16.7 billion. Altera is considered to be one of the leading FPGA manufacturers. The idea behind the eASIC acquisition is to complement Altera chips with eASIC's technology. Dan McNamara, corporate vice president and GM of the PSG division, mentioned in the official announcement, "We're seeing the largest adoption of FPGA ever because of explosion of data and cloud services, and we think this will give us a lot of differentiation versus the likes of Xilinx."

Xilinx leads the race in the FPGA market, with Intel a distant second. The acquisition of eASIC is seen as a step towards catching up with the market leader. Intel's most recent quarterly earnings report showed that the PSG division earned $498 million with a 17% compound annual growth rate (CAGR), whereas the company's biggest division, the Client Computing Group (CCG), made $8.2 billion but with a CAGR of 3%. Although PSG's overall revenue is small when compared to CCG, it shows potential in terms of future growth. Hence Intel plans to increase its investments in acquiring futuristic companies like eASIC, and it wouldn't be surprising to see more such acquisitions in the coming years. You can visit Intel's PSG blog for more interesting news on FPGAs.

Frenemies: Intel and AMD partner on laptop chip to keep Nvidia at bay
Baidu releases Kunlun AI chip, China's first cloud-to-edge AI chip
AMD's $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs


Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server, improved console testing, notification localization, and other changes.

The versioning scheme in Laravel follows the convention paradigm.major.minor. Major releases are done every six months, in February and August, while minor releases may ship every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years; the LTS releases provide the longest support window. For general releases, bug fixes are provided for six months and security fixes for a year. When referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can have breaking changes.

Laravel Nova
Laravel Nova is a pleasant-looking administration dashboard for Laravel applications. The primary feature of Nova is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

Email Verification
Laravel 5.7 introduces optional email verification for the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that is included with the framework.

Guest User Policies
In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument definition:

Gate::define('update-post', function (?User $user, Post $post) {
    // ...
});

Symfony Dump Server
Laravel 5.7 offers integration with the dump-server command via a package by Marcel Pociot. To get started, first run the dump-server Artisan command:

php artisan dump-server

Once the server starts, all calls to dump will be shown in the dump-server console window instead of your browser. This allows inspection of values without mangling your HTTP response output.

Notification Localization
Now you can send notifications in a locale other than the current language, and Laravel will even remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console Testing
Laravel 5.7 makes it easy to "mock" user input for console commands using the expectsQuestion method. Additionally, you can specify the exit code and the text you expect the console command to output using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for a complete list, visit the Laravel Release Notes.

Building a Web Service with Laravel 5
Google App Engine standard environment (beta) now includes PHP 7.2
Perform CRUD operations on MongoDB with PHP
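A quick sketch of the commands implied above, for readers trying this in an existing application. The composer constraint follows the 5.7.* advice, and php artisan dump-server is the Artisan command named in the post; the beyondcode/laravel-dump-server dev dependency is an assumption about the underlying package, which may already be present:

# Pin the framework to the 5.7 release line
composer require "laravel/framework:5.7.*"
# Pull in the dump-server package (skip if already installed)
composer require --dev beyondcode/laravel-dump-server
# Start the dump-server so dump() output goes to this console instead of the browser
php artisan dump-server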


Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, zsh shell, DriverKit, and much more!

Amrata Joshi
04 Jun 2019
6 min read
Yesterday, Apple previewed the next version of macOS, called Catalina, at its ongoing Worldwide Developers Conference (WWDC) 2019. macOS 10.15, or Catalina, comes with new features, apps, and technology for developers. With Catalina, Apple is replacing iTunes with entertainment apps such as Apple Podcasts, Apple Music, and the Apple TV app. macOS Catalina is expected to be released this fall.

Craig Federighi, Apple's senior vice president of Software Engineering, said, "With macOS Catalina, we're bringing fresh new apps to the Mac, starting with new standalone versions of Apple Music, Apple Podcasts and the Apple TV app." He further added, "Users will appreciate how they can expand their workspace with Sidecar, enabling new ways of interacting with Mac apps using iPad and Apple Pencil. And with new developer technologies, users will see more great third-party apps arrive on the Mac this fall."

What's new in macOS Catalina

Sidecar feature
Sidecar is a new feature in macOS 10.15 that lets users extend their Mac desktop by using an iPad as a second display or as a high-precision input device across creative Mac apps. Users can use their iPad for drawing, sketching, or writing in any Mac app that supports stylus input by pairing it with an Apple Pencil. Sidecar can be used for editing video with Final Cut Pro X, marking up iWork documents, or drawing with the Adobe Illustrator iPad app.

iPad app support
Catalina comes with iPad app support, a new way for developers to port their iPad apps to the Mac. Previously, this project was codenamed "Marzipan," but it's now called Catalyst. Developers will now be able to use Xcode to target their iPad apps at macOS Catalina. Twitter is planning on porting its iOS Twitter app to the Mac, and Atlassian is planning to bring its Jira iPad app to macOS Catalina. Though it is still not clear how many developers are going to support this porting, Apple is encouraging developers to port their iPad apps to the Mac.
https://twitter.com/Atlassian/status/1135631657204166662
https://twitter.com/TwitterSupport/status/1135642794473558017

Apple Music
Apple Music is a new music app that will help users discover new music, with over 50 million songs, playlists, and music videos. Users will have access to their entire music library, including the songs they have downloaded, purchased, or ripped from a CD.

Apple TV Apps
The Apple TV app features Apple TV channels, personalized recommendations, and more than 100,000 iTunes movies and TV shows. Users can browse, buy, or rent, and also enjoy 4K HDR and Dolby Atmos-supported movies. It also comes with a Watch Now section that has the Up Next option, where users can easily keep track of what they are currently watching and resume on any screen. Apple TV+, Apple's original video subscription service, will be available in the Apple TV app this fall.

Apple Podcasts
The Apple Podcasts app features over 700,000 shows in its catalog and comes with an option to be automatically notified of new episodes as soon as they become available. The app comes with new categories, collections curated by editors around the world, and advanced search tools that help in finding episodes by host, guest, or even discussion topic. Users can now easily sync their media to their devices using a cable in the new entertainment apps.

Security
In macOS Catalina, Gatekeeper checks all apps for known security issues, and the new data protections now require all apps to get permission before accessing user documents. The new Approve with Apple Watch feature helps users approve security prompts by tapping the side button on their Apple Watch. With the new Find My app, the location of a lost or stolen Mac can be anonymously relayed back to its owner by other Apple devices, even when it is offline. Macs will be able to occasionally send a secure Bluetooth signal, which will be used to create a mesh network of other Apple devices to help people track their products; a map will show where the device is located, helping owners track it down. Also, Macs with the T2 Security Chip now support Activation Lock, which will make them less attractive to thieves.

DriverKit
The macOS Catalina 10.15+ beta SDK comes with the DriverKit framework, which can be used for creating device drivers that the user installs on their Mac. Drivers built with DriverKit run in user space for improved system security and stability. The framework provides C++ classes for IO services, memory descriptors, device matching, and dispatch queues. DriverKit further defines IO-appropriate types for numbers, strings, collections, and other common types. You use these with family-specific driver frameworks like USBDriverKit and HIDDriverKit.

zsh shell on Mac
With the macOS Catalina beta, the Mac uses zsh as the interactive shell and the default login shell; the beta is currently available only to members of the Apple Developer Program. Users can make zsh the default in earlier versions of macOS as well. Currently, bash is the default shell in macOS Mojave and earlier. zsh is also compatible with the Bourne shell (sh) and bash. The company is also signalling that developers should start moving to zsh on macOS Mojave or earlier. As bash isn't a modern shell, it seems the company thinks that switching to something less aging would make more sense.
https://twitter.com/film_girl/status/1135738853724000256
https://twitter.com/_sjs/status/1135715757218705409
https://twitter.com/wongmjane/status/1135701324589256704

Additional features
Safari now has an updated start page that uses Siri Suggestions to elevate frequently visited bookmarks, sites, iCloud tabs, reading list selections, and links sent in messages. macOS Catalina comes with an option to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists. Reminders has been redesigned and now comes with a new user interface that makes it easier to create, organize, and track reminders.

It seems users are excited about the announcements made by the company and are looking forward to exploring the possibilities with the new features.
https://twitter.com/austinnotduncan/status/1135619593165189122
https://twitter.com/Alessio____20/status/1135825600671883265
https://twitter.com/MasoudFirouzi/status/1135699794360438784

To know more about this news, check out Apple's post.

Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case
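As a practical footnote to the zsh section above: switching the default login shell on an existing macOS install is a one-liner. A minimal sketch, assuming /bin/zsh is present (it ships with macOS):

# Make zsh the default login shell for the current user
chsh -s /bin/zsh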


React Native announces re-architecture of the framework for better performance

Kunal Chaudhari
22 Jun 2018
4 min read
React Native, the cross-platform mobile development framework from Facebook, is undergoing a complete rewrite with a focus on better flexibility and improved integration with native infrastructure.

Why React Native
When React Native was announced at React.js Conf 2015, Facebook opened the doors for web developers and engineers who wanted to take their existing skill set into the world of mobile development. Since then, React Native has been nothing short of a phenomenon and has come a long way, becoming the 13th most popular open source project on GitHub. React Native came with the promise of revolutionizing the user interface development process with its core set of cross-platform UI primitives and its popular declarative rendering pattern.

Previously, there have been many frameworks which branded themselves as "cross-platform", like Ionic and Cordova, but simply put, they just rendered inside a web view, as an "HTML5 app" or a "hybrid app". These apps lacked the native feel of an Android/iOS app made with Java/Swift and led to a terrible user experience. React Native, on the other hand, works a bit differently: the user interface (UI) components are kept in the native block and the business logic is kept in the JavaScript block. On any user interaction or request, the UI block detects the change and sends it to the JavaScript block, which processes the request and sends the data back to the UI block. This allows the UI block to perform with a native experience, since the processing is done somewhere else.

The Dawn Of A New Beginning
As cool as these features may sound, working with React Native can be quite difficult. If there is a feature you need that is not yet supported by the React Native library, you have to write your own Native Module in the corresponding language, which can then be linked to the React Native codebase. There are several native modules which are not present in the ecosystem, like gesture handling and native navigation, and complex hacks are required to include them in native components. For apps with complex integration between React Native and existing app code, this is frustrating.

Sophie Alpert, Engineering Manager at Facebook, mentioned in a blog post named State of React Native 2018, "We're rewriting many of React Native's internals, but most of the changes are under the hood: existing React Native apps will continue to work with few or no changes." This comes as no surprise, as Facebook clearly cares about developer experience and hence decided to go ahead with this architectural change with almost no breaking changes. A similar, widely applauded move was the transition to React Fiber.

This new architectural change is aimed at making the framework more lightweight and a better fit for existing native apps, and involves three major internal changes:

New and improved threading model: It will be possible to call synchronously into JavaScript on any thread for high-priority updates while keeping low-priority work off the main thread.
New async rendering capabilities: This will allow multiple rendering priorities and simplify asynchronous data handling.
Lighter and faster bridge: Direct calls between native and JavaScript are more efficient and will make it easier to build debugging tools like cross-language stack traces.

Along with these architectural changes, Facebook also hinted at slimming down React Native to make it fit better with the JavaScript ecosystem, including making the VM and bundler swappable.

React Native is a brilliantly designed cross-platform framework which gave a new dimension to mobile development and new hope to web developers. Is this restructuring going to cement its place as a top player in the mobile development marketplace? Only time will tell. Till then, you can read more about the upcoming changes on their official website.

Is React Native is really Native framework?
Building VR experiences with React VR 2.0
Jest 23, Facebook's popular framework for testing React applications is now released

Blazor 0.6 release and what it means for WebAssembly

Amarabha Banerjee
05 Oct 2018
3 min read
WebAssembly is changing the way we develop applications for the web. Graphics-heavy applications, browser-based games, and interactive data visualizations seem to have found a better way to our UI - the WebAssembly way. The latest Blazor 0.6 experimental release from Microsoft is an indication that Microsoft has identified WebAssembly as one of the upcoming trends and extended support to its bevy of developers.

Blazor is an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly. Blazor promises to greatly simplify the task of building fast and beautiful single-page applications that run in any browser. The following image shows the architecture of Blazor. Source: MSDN

Blazor has its own JavaScript file, Blazor.js. It uses Mono, an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime (CLR). It also uses Razor, a template engine that combines C# with HTML to create dynamic web content. Together, Blazor promises to enable dynamic and fast web apps without using the popular JavaScript frontend frameworks, which reduces the learning curve for existing C# developers.

Microsoft released the 0.6 experimental version of Blazor on October 2nd. This release includes new features for authoring templated components and enables using server-side Blazor with the Azure SignalR Service. Another important piece of news from this release is that the server-side Blazor model will be included as Razor Components in the .NET Core 3.0 release. The major highlights of this release are:

Templated components
Define components with one or more template parameters
Specify template arguments using child elements
Generic typed components with type inference
Razor templates
Refactored server-side Blazor startup code to support the Azure SignalR Service

Now the important question is: how is this release going to fuel the growth of WebAssembly-based web development? It will probably take some time for WebAssembly to become mainstream, because this is just an experimental alpha-stage release, which means there will be plenty of changes before the final release comes. But why Blazor is the right step ahead can be explained by the fact that, unlike former Microsoft platforms like Silverlight, it does not have its own rendering engine. Pixel rendering in the browser is not its responsibility, and that is what makes it lightweight. Blazor uses the browser's DOM to display data. However, the C# code running in WebAssembly cannot access the DOM directly; it has to go through JavaScript. The process presently looks like this. Source: Learn Blazor

The way this process happens might change with the beta and subsequent releases of Blazor, so that the intermediate JavaScript layer can be avoided. But that is what WebAssembly is at present: a bridge between your code and the browser, which evidently runs on JavaScript. Blazor can prove to be a very good supportive tool to fuel the growth of WebAssembly-based apps.

Why is everyone going crazy over WebAssembly?
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
Unity Benchmark report approves WebAssembly load times and performance in popular web browsers
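For readers who want to try the experimental release described above, the getting-started flow looks roughly like this. This is a sketch only: the template package name and the blazor template are assumptions based on the Blazor 0.x getting-started docs, and the .NET Core SDK must already be installed:

# Install the Blazor project templates, create an app, and run it
dotnet new -i Microsoft.AspNetCore.Blazor.Templates
dotnet new blazor -o MyBlazorApp
cd MyBlazorApp
dotnet run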


Puppet's 2019 State of DevOps Report highlights that integrating security into DevOps practices results in better business outcomes

Amrata Joshi
27 Sep 2019
5 min read
On Wednesday, Puppet announced the findings of its eighth annual State of DevOps Report. This report reveals practices and patterns that can help organisations integrate security into the software development lifecycle. As per Puppet's 2019 State of DevOps Report, 22% of the firms at the highest level of security integration have reached an advanced stage of DevOps maturity, compared with 6% of the firms without security integration. Looking at firms with an overall 'significant to full' integration status, the report finds that Europe is ahead of the Asia Pacific region and the US, with 43% in contrast to 38% or less.

Alanna Brown, Senior Director of Community and Developer Relations at Puppet and author of the State of DevOps report, said, "The DevOps principles that drive positive outcomes for software development — culture, automation, measurement and sharing — are the same principles that drive positive security outcomes. Organisations that are serious about improving their security practices and posture should start by adopting DevOps practices." Brown added, "This year's report affirms our belief that organisations who ignore or deprioritise DevOps, are the same companies who have the lowest level of security integration and who will be hit the hardest in the case of a breach."

Key findings of the 2019 State of DevOps Report
Firms at the highest level of security integration can deploy to production on demand at a higher rate than firms at all other levels of integration: 61% of such firms are able to do so, while among organisations that have not integrated security at all, less than half (49%) can deploy on demand.
82% of survey respondents at firms with the highest level of security integration said that their security practices and policies improve their firm's security posture; among respondents at firms without security integration, only 38% had that level of confidence.
Firms that integrate security throughout their lifecycle are more than twice as likely to stop a push to production for a medium-severity security vulnerability.
In the middle stages of the evolution of security integration, delivery and security teams experience higher friction while collaborating, software delivery slows down, and audit issues increase. The report findings state that friction is higher for respondents who work in security jobs than for those in non-security jobs. But if they keep at it, they will see the results of their hard work faster.

Hypothesis on remediation time
Just 7% of the total respondents can remediate a critical vulnerability in less than an hour.
32% of the total respondents can remediate in one hour to less than one day.
33% of the total respondents can remediate in one day to less than one week.

Michael Stahnke, VP of Platform Engineering, CircleCI, said, "It shouldn't be a surprise to anyone that integrating security into the software delivery lifecycle requires intentional effort and deep collaboration across teams." Stahnke added, "What did surprise me, however, was that the practices that promote cross-team collaboration had the biggest impact on the teams' confidence in the organisation's security posture. Turns out, empathy and trust aren't automatable."

Factors responsible for an organizational structure being DevOps-ready
The flexibility of the current organizational structure.
The organizational culture.
How isolated the different functions are.
The skill sets of your team.
The relationship between team leaders and teams.

Best practices for improving security posture
Development and security teams collaborate on threat models.
Security tools are integrated into the development integration pipeline so that engineers can feel confident they are not introducing known security problems into their codebases.
Security requirements, both functional and non-functional, are prioritised as part of the product backlog.
Security experts evaluate automated tests and review changes in high-risk areas of the code, like cryptography and authentication systems.
Infrastructure-related security policies are reviewed before deployment.

Andrew Plato, CEO, Anitian, said, "Puppet's State of DevOps report provides outstanding insights into the ongoing challenges of integrating security and DevOps teams." Plato added, "While the report outlines many problems, it also highlights the gains that arise when DevOps and security are fully integrated. These benefits include increased security effectiveness, more robust risk management, and tighter alignment of business and security goals. These insights mirror our experiences at Anitian implementing our security automation platform. We are proud to be a sponsor of the State of DevOps report as well as a technology partner with Puppet. We anticipate referencing this report regularly in our engagement with our customers as well as the DevOps and security communities."

To summarize, organizations that want to improve their security posture and practices should adopt DevOps practices, just as the organizations at the highest levels of DevOps adoption have fully integrated security practices.

Check out the complete 2019 State of DevOps Report here.

Other interesting news in cloud & networking
GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020


10 predictable findings from Stack Overflow's 2018 survey

Richard Gall
27 Mar 2018
4 min read
Stack Overflow's 2018 survey has just been released. It contains some great insights into the way developers live and work, and it also confirms plenty of things that have become commonplace in the tech industry. So, here are the 10 most predictable findings from this year's survey...

JavaScript is the most popular programming language
We probably shouldn't be surprised: JavaScript has come out as the most popular language in the Stack Overflow survey since 2013. And with JavaScript now cemented as the foundational language of the web, it's unlikely we're going to see it knocked off its perch any time soon. With the growth of tools like Node.js extending JavaScript's capabilities, it's only going to consolidate its position further.

Developers are on Stack Overflow all the time
This is a pretty comforting stat for the team at Stack Overflow: they're the go-to home of almost everyone who works with code on the planet. 32.5% of respondents claimed they check Stack Overflow 'daily or almost daily', 31% check it multiple times a day, and a further 22% check the site at least once a week.

Developers are self-taught and write code outside of work
This is something we're well aware of at Packt: developers are autodidacts. That's probably partly the nature of working with software that can be so changeable, and partly the nature of developers themselves: curious and exploratory. 86.7% of survey respondents taught themselves a new language, tool, or framework outside of any formal setting. The fact that formal education is so rarely sought or provided explains why so many developers write code outside of work: Stack Overflow's 2018 survey reports that more than 80% of developers code as a hobby.

Git is easily the most popular version control tool
This was a sure thing, and the survey stats prove it. 87.2% of respondents reported Git as their version control tool of choice. Despite murmurs online that it's unnecessarily complicated (the search suggestion next to 'why is Git so popular' is 'why is Git so complicated'), thanks to its powerful, scalable features and, of course, the dominance of GitHub in the lives of technical people, it has become a core part of developer life.

Python is the fastest growing programming language on the planet
We know just how popular Python is at Packt. We joke that we should just be the Python Publishing Company. This year it has risen above C#, after rising above PHP last year. Where will its rise end? Or, as more and more people need to write code, will it become the de facto language of technology-powered professionals?

TensorFlow is getting lots of love
TensorFlow was 2018's most loved tool, with more than 73% of developers using it expressing a great deal of enthusiasm for it. Clearly it's delivering on its tagline: 'machine learning for everyone'.

React is the most in-demand tool
If TensorFlow is the most loved tool, we also weren't that surprised to see that React is massively in demand: 21.3% of developers who don't currently use it expressed an interest in using it. Node.js was in second place with 20.9%. The love for TensorFlow must be getting out there, as it came in third here, with 15.5% of respondents interested in developing with it.

People hate using Visual Basic
It's not hard to find someone complaining about Visual Basic online. The ecosystem is widening, and there are far more attractive options for developers out there. Microsoft is even self-consciously changing its strategy with VB, as this post from last year outlines. Stack Overflow reports that this is the third year in a row that Visual Basic has come out as the 'most dreaded' programming language, meaning developers currently using it express no interest in continuing to do so.

Job churn is a fact of life in tech
34.6% of survey respondents changed jobs less than a year ago, and a further 22% changed jobs between 1 and 2 years ago. Like technology itself, change is a constant in the lives of many developers, for better and worse.

Agile and Scrum are the most popular software development methodologies
No surprises here: these methodologies have become embedded in the everyday practices of millions of developers.


These 2 software skills subscription services will save you time - and cash

Richard Gall
18 Jun 2018
2 min read
Staying up to date with software is hard. This year's Skill Up report confirms that, with 43% of developers claiming that a lack of quality learning resources was a big barrier to their organization reaching its goals. It's not that there aren't learning resources out there - there are. It's just that finding the right ones can take some time. And when you've got a million things to do and deadlines due yesterday, searching for the best deal feels like a bit of a waste of your day.

2 subscription services. 3 months. $30.
Luckily, 2 subscription services have teamed up to help you stay up to date in tech and prepared for future trends and tools as they emerge. Mapt and SitePoint both offer an impressive range of software learning resources; combined, they provide anyone in software with the reassurance that they have immediate access to insight and guidance on the latest tools in their field. From Monday 18 June to Sunday 24 June, you'll be able to get 3 months of Mapt - that's full access to all Packt's eBooks and videos - and SitePoint Premium for just $30. That's a whole lot of content for the price of just one book. Learn more about the offer.

What you get with Mapt...
Access to over 6,000 eBooks & videos
Over 100 new courses added each month
Over 1,000 technologies and tools covered
Curated Skill Plans to help direct your learning
Skill Assessments to reinforce your learning
Up to 2 FREE eBook & video tokens a month to keep forever

Sounds good, right? But there's a whole lot more thanks to SitePoint...

What you get with SitePoint Premium...
123+ DRM-free eBooks and courses
Access to over 123 courses
New and trending web development content released monthly
Join a community of 35,000+ members
Course helpers assistance

The offer ends on Sunday 24 June, so don't waste time - a comprehensive duo of software learning subscriptions is just a few clicks away.

Get your 3 month subscription to Mapt and SitePoint Premium

Kali Linux 2020.2 Release from Kali Linux

Matthew Emerick
12 May 2020
9 min read
Despite the turmoil in the world, we are thrilled to be bringing you an awesome update with Kali Linux 2020.2! And it is available for immediate download. A quick overview of what's new since January:

KDE Plasma Makeover & Login
PowerShell by Default. Kind of.
Kali on ARM Improvements
Lessons From The Installer Changes
New Key Packages & Icons
Behind the Scenes, Infrastructure Improvements

KDE Plasma Makeover & Login
With XFCE and GNOME having had a Kali Linux look and feel update, it's time to go back to our roots (days of backtrack-linux) and give some love and attention to KDE Plasma. Introducing our dark and light themes for KDE Plasma. On the subject of theming, we have also tweaked the login screen (lightdm). It looks different, both graphically and in layout (the login boxes are aligned now)!

PowerShell by Default. Kind of.
A while ago, we put PowerShell into Kali Linux's network repository. This meant that if you wanted PowerShell, you had to install the package as a one-off by doing:

kali@kali:~$ sudo apt install -y powershell

We have now put PowerShell into one of our (primary) metapackages, kali-linux-large. This means that if you choose to install this metapackage during system setup, or once Kali is up and running (sudo apt install -y kali-linux-large), and PowerShell is compatible with your architecture, you can just jump straight into it (pwsh)! PowerShell isn't in the default metapackage (that's kali-linux-default), but it is in the one that includes the default and many extras, and can be included during system setup.

Kali on ARM Improvements
With Kali Linux 2020.1, desktop images no longer used "root/toor" as the default credentials to log in, but moved to "kali/kali". Our ARM images are now the same: we are no longer using the superuser account to log in with. We also warned back in 2019.4 that we would be moving away from an 8GB minimum SD card, and we are finally ready to pull the trigger on this. The requirement is now 16GB or larger. One last note on the subject of ARM devices: we are not installing locales-all any more, so we highly recommend that you set your locale. This can be done by running the following command, then logging out and back in:

sudo dpkg-reconfigure locales

Lessons From Installer Changes
With Kali Linux 2020.1 we announced our new style of images, "installer" & "live".

Issue
It was intended that both "installer" & "live" could be customized during setup, to select which metapackage and desktop environment to use. When we did that, we couldn't include metapackages beyond default in those images, as it would create too large of an ISO. As the packages were not in the image, if you selected anything other than the default options, it would require network access to obtain the missing packages. After release, we noticed some users selecting "everything" and then waiting hours for installs to happen. They couldn't understand why the installs were taking so long. We also used different software on the back end to generate these images, and a few bugs slipped through the cracks (which explains the 2020.1a and 2020.1b releases).

Solutions
We have removed kali-linux-everything as an install-time option (which is every package in the Kali Linux repository) in the installer image, as you can imagine that would have taken a long time to download and wait for during install.
We have cached kali-linux-large & every desktop environment into the install image (which is why it's a little larger than previously to download), allowing for a COMPLETE offline network install.
We have removed customization for "live" images; the installer switched back to copying the content of the live filesystem, again allowing a full offline install but forcing usage of our default XFCE desktop.

Summary
If you are wanting to run Kali from a live image (DVD or USB stick), please use "live".
If you are wanting anything else, please use "installer".
If you are wanting anything other than XFCE as your desktop environment, please use "installer".
If you are not sure, get "installer".

Also, please keep in mind that on an actual assessment "more" is not always "better". There are very few reasons to install kali-linux-everything, and many reasons not to. To those of you that were selecting this option, we highly suggest you take some time and educate yourself on Kali before using it. Kali, or any other pentest distribution, is not a "turn key auto hack" solution. You still need to learn your platform, learn your tools, and educate yourself in general. Consider what you are really telling Kali to do when you are installing kali-linux-everything. It's similar to going into your phone's app store and saying "install everything!". That's not likely to have good results. We provide a lot of powerful tools and options in Kali, and while we may have a reputation of "providing machine guns to monkeys", we actually expect you to know what you are doing. Kali is not going to hold your hand. It expects you to do the work of learning, and Kali will be unforgiving if you don't.

New Key Packages & Icons
Just like every Kali Linux release, we include the latest packages possible. Key ones to point out this release are:
GNOME 3.36 – a few of you may have noticed a bug that slipped in during the first 12 hours of the update being available. We're sorry about this, and have measures in place for it to not happen again.
Joplin – we are planning on replacing CherryTree with this in Kali Linux 2020.3!
Nextnet
Python 3.8
SpiderFoot

For the time being, as a temporary measure due to certain tools needing it, we have re-included python2-pip. Python 2 has now reached "End Of Life" and is no longer getting updated. Tool makers, please, please, please port to Python 3. Users of tools, if you notice that a tool is not on Python 3 yet, you can help too! It is not going to be around forever. Whilst talking about packages, we have also started to refresh our package logos for each tool. You'll notice them in the Kali Linux menu, as well as on the tools page on GitLab (more information on this coming soon!). If your tool has a logo and we have missed it, please let us know on the bug tracker.

WSLconf
WSLconf happened earlier this year, and @steev gave a 35-minute talk on "How We Use WSL at Kali". Go check it out!

Behind the Scenes, Infrastructure Improvements
We have been celebrating the arrival of new servers, which we have been migrating to over the last few weeks. This includes a new ARM build server and what we use for package testing. This may not be directly noticeable, but you may reap the benefits of it! If you are wanting to help out with Kali, we have added a new section to our documentation showing how to submit an autopkgtest. Feedback is welcome!

Kali Linux NetHunter
We were so excited about some of the work that has been happening with NetHunter recently that we already did a mid-term release to showcase it and get it to you as quickly as possible. On top of all the previous NetHunter news, there is even more to announce this time around!
Nexmon support has been revived, bringing WiFi monitor support and frame injection to wlan0 on the Nexus 6P, Nexus 5, Sony Xperia Z5 Compact, and more!
OnePlus 3T images have been added to the download page.
We have crossed 160 different kernels in our repository, allowing NetHunter to support over 64 devices! Yes, over 160 kernels and over 64 devices supported. Amazing.
Our documentation page has received a well-deserved refresh, especially the kernel development section.

One of the most common questions to come in about NetHunter is "What device should I run it on?". Keep your eye on this page to see what your options are on an automatically updated basis! When you think about the amount of power NetHunter provides in such a compact package, it really is mind blowing. It's been amazing to watch this progress, and the entire Kali team is excited to show you what is coming in the future.

Download Kali Linux 2020.2

Fresh images
So what are you waiting for? Start downloading already! Seasoned Kali users are already aware of this, but for the ones who are not, we also produce weekly builds that you can use. If you can't wait for our next release and you want the latest packages when you download the image, you can just use the weekly image instead. This way you'll have fewer updates to do. Just know these are automated builds that we don't QA like we do our standard release images.

Existing Upgrades
If you already have an existing Kali installation, remember you can always do a quick update:

kali@kali:~$ echo "deb http://http.kali.org/kali kali-rolling main non-free contrib" | sudo tee /etc/apt/sources.list
kali@kali:~$ sudo apt update && sudo apt -y full-upgrade
kali@kali:~$ [ -f /var/run/reboot-required ] && sudo reboot -f

You should now be on Kali Linux 2020.2. We can do a quick check by doing:

kali@kali:~$ grep VERSION /etc/os-release
VERSION="2020.2"
VERSION_ID="2020.2"
VERSION_CODENAME="kali-rolling"
kali@kali:~$ uname -v
#1 SMP Debian 5.5.17-1kali1 (2020-04-21)
kali@kali:~$ uname -r
5.5.0-kali2-amd64

NOTE: The output of uname -r may be different depending on the system architecture. As always, should you come across any bugs in Kali, please submit a report on our bug tracker. We'll never be able to fix what we don't know is broken! And Twitter is not a Bug Tracker!


Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal

Melisha Dsouza
31 Aug 2018
3 min read
Google and Mastercard have apparently signed a deal that was kept as a secret from most of the two billion Mastercard holders. The deal allows Google to track users’ offline buying habits. The search engine giant has been stalking offline purchases made in stores through Mastercard purchase histories and correlating them with online ad interactions. Both companies haven’t released an official statement about the partnership to the public about the arrangement. In May 2017, Google announced a service called “Store Sales Measurement”, which recorded about 70 percent of US credit and debit card transactions through third-party partnerships.  Selected Google advertisers had access to this new tool, which tracked whether the ads they ran online led to a sale at a physical store in the U.S. As reported by Bloomberg, an anonymous source familiar to the deal stated that Mastercard also provided Google with customers’ transaction data thus contributing to the 70% share. It’s highly probable that other credit card companies, also contribute the data of their customer transactions Advertisers spend lavishly on Google to gain valuable insights into the link between digital ads, a website visit or an online purchase. This supports the speculations that the deal is profitable for Google. How do they track how you shop? A customer logs into his/her Google account on the web and clicks on any Google ad.  They may often browse a certain item without purchasing it right away. Within 30 days, if he/she uses their MasterCard to buy the same item in a physical store, Google will send the advertiser a report about the product and the effectiveness of its ads, along with a section for “offline revenue” letting the advertiser know the retail sales. All of this raises the question on how much does Google actually know about your personal details? Both Google and Mastercard have clarified to The Verge that the data is anonymized in order to protect personally identifiable information. However, Google declined to confirm the deal with Mastercard. A Google spokesperson released a statement  to MailOnline saying: "Before we launched this beta product last year, we built a new, double-blind encryption technology that prevents both Google and our partners from viewing our respective users’ personally identifiable information. We do not have access to any personal information from our partners’ credit and debit cards, nor do we share any personal information with our partners. Google users can opt-out with their Web and App Activity controls, at any time.” This new controversy closely follows the heels of an earlier debacle last week when it was discovered that Google is providing advertisers with location history data collated from Google Maps and other more granular data points collected by its Android operating system. But this data never helped in understanding whether a customer actually purchased a product. Toggling off "Web and App Activity"  (enabled by default) will help in turning this feature off. The category also controls whether Google can pinpoint your exact GPS coordinates through Maps data and browser searches and whether it can crosscheck a customer's offline purchases with their online ad-related activity. Read more in-depth coverage on this news first reported at Bloomberg. 
Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology Google’s Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns Google Titan Security key with secure FIDO two factor authentication is now available for purchase  


Facebook is reportedly working on Threads app, an extension of Instagram's 'Close friends' feature to take on Snapchat

Amrata Joshi
02 Sep 2019
3 min read
Facebook is seemingly working on a new messaging app called Threads that would help users to share their photos, videos, location, speed, and battery life with only their close friends, The Verge reported earlier this week. This means users can selectively share content with their friends while not revealing to others the list of close friends with whom the content is shared. The app currently does not display the real-time location but it might notify by stating that a friend is “on the move” as per the report by The Verge. How do Threads work? As per the report by The Verge,  Threads app appears to be similar to the existing messaging product inside the Instagram app. It seems to be an extension of the ‘Close friends’ feature for Instagram stories where users can create a list of close friends and make their stories just visible to them.  With Threads, users who have opted-in for ‘automatic sharing’ of updates will be able to regularly show their status updates and real-time information  in the main feed to their close friends.. The auto-sharing of statuses will be done using the mobile phone sensors.  Also, the messages coming from your friends would appear in a central feed, with a green dot that will indicate which of your friends are currently active/online. If a friend has posted a story recently on Instagram, you will be able to see it even from Threads app. It also features a camera, which can be used to capture photos and videos and send them to close friends. While Threads are currently being tested internally at Facebook, there is no clarity about the launch of Threads. Direct’s revamped version or Snapchat’s potential competitor? With Threads, if Instagram manages to create a niche around the ‘close friends’, it might shift a significant proportion of Snapchat’s users to its platform.  In 2017, the team had experimented with Direct, a standalone camera messaging app, which had many filters that were similar to Snapchat. But this year in May, the company announced that they will no longer be supporting Direct. Threads look like a Facebook’s second attempt to compete with Snapchat. https://twitter.com/MattNavarra/status/1128875881462677504 Threads app focus on strengthening the ‘close friends’ relationships might promote more of personal data sharing including even location and battery life. This begs the question: Is our content really safe? Just three months ago, Instagram was in the news for exposing personal data of millions of influencers online. The exposed data included contact information of Instagram influencers, brands and celebrities https://twitter.com/hak1mlukha/status/1130532898359185409 According to Instagram’s current Terms of Use, it does not get ownership over the information shared on it. But here’s the catch, it also states that it has the right to host, use, distribute, run, modify, copy, publicly perform or translate, display, and create derivative works of user content as per the user’s privacy settings. In essence, the platform has a right to use the content we post.  Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong protests   Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules

Announcing Cloud Build, Google’s new continuous integration and delivery (CI/CD) platform

Vijin Boricha
27 Jul 2018
2 min read
In today’s world, thanks to DevOps, software developers are no longer expected to put up with long release and development cycles. Cloud platforms, already popular for providing flexible infrastructure across organizations, can now offer better solutions with the help of DevOps. Applications can receive bug fixes and updates almost every day, but these update cycles require a CI/CD framework. Google recently released its new continuous integration/continuous delivery platform, Cloud Build, at Google Cloud Next ’18 in San Francisco.

Cloud Build is a complete continuous integration and continuous delivery platform that helps you build software at scale across all languages. It gives developers complete control over a variety of environments such as VMs, serverless, Firebase, or Kubernetes. Cloud Build supports Docker, giving developers the option of automating deployments to Google Kubernetes Engine or Kubernetes for continuous delivery. It also supports triggers for application deployment, which launch an update whenever certain conditions are met.

Google also tried to eliminate the pain of managing build servers by providing a free tier of Cloud Build with up to 120 build minutes per day and up to 10 concurrent builds. After the first free 120 build minutes are exhausted, additional build minutes are charged at $0.0034 per minute.

Another plus point of Cloud Build is that it automatically identifies package vulnerabilities before deployment, and it allows users to run builds on local machines and later deploy in the cloud. In case of issues, Cloud Build provides detailed insights that ease debugging via build errors and warnings. It also provides an option to filter build results using tags or queries to identify time-consuming tests or slow-performing builds.

Key features of Google Cloud Build
Simpler and faster commit-to-deploy time
Language-agnostic builds
Options to create pipelines to automate deployments
Flexibility to define custom workflows
Build access controlled with Google Cloud security

Check out the Google Cloud Blog if you want to learn more about how to start implementing Google's CI/CD offerings.

Related Links
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Google’s event-driven serverless platform, Cloud Function, is now generally available
Google Cloud Launches Blockchain Toolkit to help developers build apps easily


Store and Access Time Series Data at Any Scale with Amazon Timestream – Now Generally Available from AWS News Blog

Matthew Emerick
01 Oct 2020
10 min read
Time series are a very common data format that describes how things change over time. Some of the most common sources are industrial machines and IoT devices, IT infrastructure stacks (such as hardware, software, and networking components), and applications that share their results over time. Managing time series data efficiently is not easy because the data model doesn’t fit general-purpose databases.

For this reason, I am happy to share that Amazon Timestream is now generally available. Timestream is a fast, scalable, and serverless time series database service that makes it easy to collect, store, and process trillions of time series events per day up to 1,000 times faster and at as little as 1/10th the cost of a relational database. This is made possible by the way Timestream manages data: recent data is kept in memory and historical data is moved to cost-optimized storage based on a retention policy you define. All data is always automatically replicated across multiple availability zones (AZs) in the same AWS region. New data is written to the memory store, where data is replicated across three AZs before success is returned for the operation. Data replication is quorum based, such that the loss of nodes, or of an entire AZ, does not disrupt durability or availability. In addition, data in the memory store is continuously backed up to Amazon Simple Storage Service (S3) as an extra precaution.

Queries automatically access and combine recent and historical data across tiers without the need to specify the storage location, and support time series-specific functionality to help you identify trends and patterns in data in near real time. There are no upfront costs; you pay only for the data you write, store, or query. Based on the load, Timestream automatically scales up or down to adjust capacity, without the need to manage the underlying infrastructure.

Timestream integrates with popular services for data collection, visualization, and machine learning, making it easy to use with existing and new applications. For example, you can ingest data directly from AWS IoT Core, Amazon Kinesis Data Analytics for Apache Flink, and Amazon MSK. You can visualize data stored in Timestream from Amazon QuickSight, and use Amazon SageMaker to apply machine learning algorithms to time series data, for example for anomaly detection. You can use Timestream’s fine-grained AWS Identity and Access Management (IAM) permissions to easily ingest or query data from an AWS Lambda function. We are providing the tools to use Timestream with open source platforms such as Apache Kafka, Telegraf, Prometheus, and Grafana.

Using Amazon Timestream from the Console

In the Timestream console, I select Create database. I can choose to create a Standard database or a Sample database populated with sample data. I proceed with a standard database and name it MyDatabase. All Timestream data is encrypted by default. I use the default master key, but you can use a customer managed key that you created using AWS Key Management Service (KMS). In that way, you can control the rotation of the master key, and who has permissions to use or manage it. I complete the creation of the database. Now my database is empty.

I select Create table and name it MyTable. Each table has its own data retention policy. Data is first ingested in the memory store, where it can be stored from a minimum of one hour to a maximum of a year. After that, it is automatically moved to the magnetic store, where it can be kept from a minimum of one day to a maximum of 200 years, after which it is deleted. In my case, I select 1 hour of memory store retention and 5 years of magnetic store retention.

When writing data in Timestream, you cannot insert data that is older than the retention period of the memory store. For example, in my case I will not be able to insert records older than 1 hour. Similarly, you cannot insert data with a future timestamp. I complete the creation of the table. As you noticed, I was not asked for a data schema; Timestream will automatically infer it as data is ingested.
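If you prefer to script this setup instead of clicking through the console, a minimal boto3 sketch along the following lines should work (this is not part of the original walkthrough; the database and table names mirror the ones above, and error handling is omitted):

import boto3

DATABASE_NAME = "MyDatabase"
TABLE_NAME = "MyTable"

write_client = boto3.client('timestream-write')

# Create the database; it is encrypted with the default key unless a KmsKeyId is passed.
write_client.create_database(DatabaseName=DATABASE_NAME)

# Create the table with the retention policy described above:
# 1 hour in the memory store, 5 years (1825 days) in the magnetic store.
write_client.create_table(
    DatabaseName=DATABASE_NAME,
    TableName=TABLE_NAME,
    RetentionProperties={
        'MemoryStoreRetentionPeriodInHours': 1,
        'MagneticStoreRetentionPeriodInDays': 1825
    }
)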
Now, let’s put some data in the table!

Loading Data in Amazon Timestream

Each record in a Timestream table is a single data point in the time series and contains:
The measure name, type, and value. Each record can contain a single measure, but different measure names and types can be stored in the same table.
The timestamp of when the measure was collected, with nanosecond granularity.
Zero or more dimensions that describe the measure and can be used to filter or aggregate data. Records in a table can have different dimensions.

For example, let’s build a simple monitoring application collecting CPU, memory, swap, and disk usage from a server. Each server is identified by a hostname and has a location expressed as a country and a city. In this case, the dimensions would be the same for all records:
country
city
hostname

Records in the table are going to measure different things. The measure names I use are:
cpu_utilization
memory_utilization
swap_utilization
disk_utilization

Measure type is DOUBLE for all of them.

For the monitoring application, I am using Python. To collect monitoring information I use the psutil module, which I can install with:

pip3 install psutil

Here’s the code for the collect.py application:

import time
import boto3
import psutil

from botocore.config import Config

DATABASE_NAME = "MyDatabase"
TABLE_NAME = "MyTable"

COUNTRY = "UK"
CITY = "London"
HOSTNAME = "MyHostname"  # You can make it dynamic using socket.gethostname()

INTERVAL = 1  # Seconds


def prepare_record(measure_name, measure_value):
    record = {
        'Time': str(current_time),
        'Dimensions': dimensions,
        'MeasureName': measure_name,
        'MeasureValue': str(measure_value),
        'MeasureValueType': 'DOUBLE'
    }
    return record


def write_records(records):
    try:
        result = write_client.write_records(DatabaseName=DATABASE_NAME,
                                            TableName=TABLE_NAME,
                                            Records=records,
                                            CommonAttributes={})
        status = result['ResponseMetadata']['HTTPStatusCode']
        print("Processed %d records. WriteRecords Status: %s" %
              (len(records), status))
    except Exception as err:
        print("Error:", err)


if __name__ == '__main__':

    session = boto3.Session()
    write_client = session.client('timestream-write', config=Config(
        read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10}))
    query_client = session.client('timestream-query')

    dimensions = [
        {'Name': 'country', 'Value': COUNTRY},
        {'Name': 'city', 'Value': CITY},
        {'Name': 'hostname', 'Value': HOSTNAME},
    ]

    records = []

    while True:

        current_time = int(time.time() * 1000)

        cpu_utilization = psutil.cpu_percent()
        memory_utilization = psutil.virtual_memory().percent
        swap_utilization = psutil.swap_memory().percent
        disk_utilization = psutil.disk_usage('/').percent

        records.append(prepare_record('cpu_utilization', cpu_utilization))
        records.append(prepare_record(
            'memory_utilization', memory_utilization))
        records.append(prepare_record('swap_utilization', swap_utilization))
        records.append(prepare_record('disk_utilization', disk_utilization))

        print("records {} - cpu {} - memory {} - swap {} - disk {}".format(
            len(records), cpu_utilization, memory_utilization,
            swap_utilization, disk_utilization))

        if len(records) == 100:
            write_records(records)
            records = []

        time.sleep(INTERVAL)

I start the collect.py application. Every 100 records, data is written to the MyTable table:

$ python3 collect.py
records 4 - cpu 31.6 - memory 65.3 - swap 73.8 - disk 5.7
records 8 - cpu 18.3 - memory 64.9 - swap 73.8 - disk 5.7
records 12 - cpu 15.1 - memory 64.8 - swap 73.8 - disk 5.7
...
records 96 - cpu 44.1 - memory 64.2 - swap 73.8 - disk 5.7
records 100 - cpu 46.8 - memory 64.1 - swap 73.8 - disk 5.7
Processed 100 records. WriteRecords Status: 200
records 4 - cpu 36.3 - memory 64.1 - swap 73.8 - disk 5.7
records 8 - cpu 31.7 - memory 64.1 - swap 73.8 - disk 5.7
records 12 - cpu 38.8 - memory 64.1 - swap 73.8 - disk 5.7
...

Now, in the Timestream console, I see the schema of the MyTable table, automatically updated based on the data ingested. Note that, since all measures in the table are of type DOUBLE, the measure_value::double column contains the value for all of them. If the measures were of different types (for example, INT or BIGINT) I would have more columns (such as measure_value::int and measure_value::bigint). In the console, I can also see a recap of which kinds of measures I have in the table, their corresponding data type, and the dimensions used for each specific measure.

Querying Data from the Console

I can query time series data using SQL. The memory store is optimized for fast point-in-time queries, while the magnetic store is optimized for fast analytical queries. However, queries automatically process data in all stores (memory and magnetic) without having to specify the data location in the query. I am running queries straight from the console, but I can also use JDBC connectivity to access the query engine. I start with a basic query to see the most recent records in the table:

SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8
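The same query can also be run programmatically with the timestream-query client that collect.py already creates. As a rough sketch (not part of the original walkthrough, and assuming the default AWS credential chain), results come back as rows of scalar values that can be paged through with NextToken:

import boto3

query_client = boto3.client('timestream-query')

QUERY = 'SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8'

def run_query(query_string):
    # Page through the result set and print each row as a dict keyed by column name.
    next_token = None
    while True:
        kwargs = {'QueryString': query_string}
        if next_token:
            kwargs['NextToken'] = next_token
        response = query_client.query(**kwargs)
        columns = [col['Name'] for col in response['ColumnInfo']]
        for row in response['Rows']:
            values = [datum.get('ScalarValue') for datum in row['Data']]
            print(dict(zip(columns, values)))
        next_token = response.get('NextToken')
        if next_token is None:
            break

run_query(QUERY)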
Let’s try something a little more complex. I want to see the average CPU utilization aggregated by hostname in 5-minute intervals for the last two hours. I filter records based on the content of measure_name. I use the function bin() to round time to a multiple of an interval size, and the function ago() to compare timestamps:

SELECT hostname,
       bin(time, 5m) as binned_time,
       avg(measure_value::double) as avg_cpu_utilization
FROM MyDatabase.MyTable
WHERE measure_name = 'cpu_utilization'
  AND time > ago(2h)
GROUP BY hostname, bin(time, 5m)

When collecting time series data you may miss some values. This is quite common, especially for distributed architectures and IoT devices. Timestream has some interesting functions that you can use to fill in missing values, for example using linear interpolation, or based on the last observation carried forward. More generally, Timestream offers many functions that help you use mathematical expressions, manipulate strings, arrays, and date/time values, use regular expressions, and work with aggregations/windows.

To experience what you can do with Timestream, you can create a sample database and add the two IoT and DevOps datasets that we provide. Then, in the console query interface, look at the sample queries to get a glimpse of some of the more advanced functionalities.

Using Amazon Timestream with Grafana

One of the most interesting aspects of Timestream is its integration with many platforms. For example, you can visualize your time series data and create alerts using Grafana 7.1 or higher. The Timestream plugin is part of the open source edition of Grafana. I add a new GrafanaDemo table to my database, and use another sample application to continuously ingest data. The application simulates performance data collected from a microservice architecture running on thousands of hosts. I install Grafana on an Amazon Elastic Compute Cloud (EC2) instance and add the Timestream plugin using the Grafana CLI:

$ grafana-cli plugins install grafana-timestream-datasource

I use SSH port forwarding to access the Grafana console from my laptop:

$ ssh -L 3000:<EC2-Public-DNS>:3000 -N -f ec2-user@<EC2-Public-DNS>

In the Grafana console, I configure the plugin with the right AWS credentials, and the Timestream database and table. Now, I can select the sample dashboard, distributed as part of the Timestream plugin, using data from the GrafanaDemo table where performance data is continuously collected.

Available Now

Amazon Timestream is available today in US East (N. Virginia), Europe (Ireland), US West (Oregon), and US East (Ohio). You can use Timestream with the console, the AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. With Timestream, you pay based on the number of writes, the data scanned by your queries, and the storage used. For more information, please see the pricing page.

You can find more sample applications in this repo. To learn more, please see the documentation. It’s never been easier to work with time series, including data ingestion, retention, access, and storage tiering. Let me know what you are going to build!

— Danilo