Tech News - Programming

573 Articles

GNU Octave 5.1.0 releases with new changes and improvements

Natasha Mathur
04 Mar 2019
3 min read
The GNU Octave team released version 5.1.0 of the popular high-level programming language last week. GNU Octave 5.1.0 comes with general improvements, dependency changes, and other updates.

What's new in GNU Octave 5.1.0?

General improvements

The Octave plotting system in GNU Octave 5.1.0 supports high-resolution screens (those with greater than 96 DPI, such as HiDPI/Retina monitors).
There is newly added Unicode character support for files and folders on Windows.
The fsolve function has been modified to use larger step sizes when calculating the Jacobian of a function with finite differences, thereby leading to faster convergence.
The ranks function has been recoded for performance and is now 25x faster. It also supports a third argument that specifies how to resolve the ranking of tied values.
The randi function has been recoded to produce an unbiased (all results equally likely) sample of integers.
The isdefinite function now returns true or false instead of -1, 0, or 1.
The intmax, intmin, and flintmax functions can now accept a variable as input.
Path handling functions no longer perform variable or brace expansion on path elements, and Octave's load-path is no longer subject to these expansions.
A new printing device, "-ddumb", can produce ASCII art for plots. This device is available only with the gnuplot toolkit.

Other changes

Dependencies: The GUI now requires Qt libraries in GNU Octave 5.1.0; the minimum supported Qt4 version is Qt 4.8. The OSMesa library is no longer used; to print invisible figures when using OpenGL graphics, the Qt QOffscreenSurface feature must be available. The FFTW library is now required for FFT calculations, as the FFTPACK sources have been removed from Octave.

Matlab compatibility: Functions such as issymmetric and ishermitian now accept a "nonskew" or "skew" option for calculating the symmetric or skew-symmetric property of a matrix. The issorted function can now take a direction option of "ascend" or "descend". You can now call clear with no arguments and it will remove only local variables from the current workspace; global variables will no longer be visible, but will continue to exist in the global workspace.

Graphic objects: Figure graphic objects in GNU Octave 5.1.0 have a new read-only property "Number" that returns the handle (number) of the figure; if "IntegerHandle" is set to "off", the property returns an empty matrix []. Patch and surface graphic objects can now use the "FaceNormals" property for flat lighting. "FaceNormals" and "VertexNormals" are now calculated only when necessary, to improve graphics performance. The "Margin" property of text objects has a new default of 3 rather than 2.

For the complete list of changes, check out the official GNU Octave 5.1.0 release notes.

Related reading:
GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL
Bash 5.0 is here with new features and improvements
GNU ed 1.15 released!
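A few of the changes above can be illustrated with a short, hedged Octave sketch; the matrix and values are illustrative and not taken from the release notes:

```octave
% randi is recoded to draw an unbiased sample of integers
x = randi(10, 1, 5);        % five integers uniformly distributed in 1..10

% isdefinite now returns true/false rather than -1, 0, or 1
A = [2, -1; -1, 2];         % a symmetric positive definite matrix
isdefinite(A)               % ans = 1 (true)

% the new "-ddumb" device renders a plot as ASCII art (gnuplot toolkit only)
graphics_toolkit("gnuplot");
plot(1:10);
print -ddumb ascii_plot.txt
```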

Create and deploy a Custom Vision predictive service in R with AzureVision from Revolutions

Matthew Emerick
13 May 2020
9 min read
The AzureVision package is an R frontend to Azure Computer Vision and Azure Custom Vision. These services let you leverage Microsoft's Azure cloud to carry out visual recognition tasks using advanced image processing models, with minimal machine learning expertise.

The basic idea behind Custom Vision is to take a pre-built image recognition model supplied by Azure, and customise it for your needs by supplying a set of images with which to update it. All model training and prediction is done in the cloud, so you don't need a powerful machine of your own. Similarly, since you are starting with a model that has already been trained, you don't need a very large dataset or long training times to obtain good predictions (ideally). This article walks you through how to create, train and deploy a Custom Vision model in R, using AzureVision.

Creating the resources

You can create the Custom Vision resources via the Azure portal, or in R using the facilities provided by AzureVision. Note that Custom Vision requires at least two resources to be created: one for training, and one for prediction. The available service tiers for Custom Vision are F0 (free, limited to 2 projects for training and 10k transactions/month for prediction) and S0. Here is the R code for creating the resources:

```r
library(AzureVision)

# insert your tenant, subscription, resgroup name and location here
rg <- AzureRMR::get_azure_login(tenant)$
    get_subscription(sub_id)$
    create_resource_group(rg_name, location=rg_location)

# insert your desired Custom Vision resource names here
res <- rg$create_cognitive_service(custvis_resname,
    service_type="CustomVision.Training", service_tier="S0")
pred_res <- rg$create_cognitive_service(custvis_predresname,
    service_type="CustomVision.Prediction", service_tier="S0")
```

Training

Custom Vision defines two different types of endpoint: a training endpoint, and a prediction endpoint. Somewhat confusingly, they can both use the same hostname, but with different URL paths and authentication keys. To start, call the customvision_training_endpoint function with the service URL and key.

```r
url <- res$properties$endpoint
key <- res$list_keys()[1]
endp <- customvision_training_endpoint(url=url, key=key)
```

Custom Vision is organised hierarchically. At the top level, we have a project, which represents the data and model for a specific task. Within a project, we have one or more iterations of the model, built on different sets of training images. Each iteration in a project is independent: you can create (train) an iteration, deploy it, and delete it without affecting other iterations.

In turn, there are three different types of projects:
A multiclass classification project is for classifying images into a set of tags, or target labels. An image can be assigned to one tag only.
A multilabel classification project is similar, but each image can have multiple tags assigned to it.
An object detection project is for detecting which objects, if any, from a set of candidates are present in an image.

Let's create a classification project:

```r
testproj <- create_classification_project(endp, "testproj", export_target="standard")
```

Here, we specify the export target to be standard to support exporting the final model to one of various standalone formats, e.g. TensorFlow, CoreML or ONNX. The default is none, in which case the model stays on the Custom Vision server. The advantage of none is that the model can be more complex, resulting in potentially better accuracy.
Adding and tagging images

Since a Custom Vision model is trained in Azure and not locally, we need to upload some images. The data we'll use comes from the Microsoft Computer Vision Best Practices project. This is a simple set of images containing 4 kinds of objects one might find in a fridge: cans, cartons, milk bottles, and water bottles.

```r
download.file(
    "https://cvbp.blob.core.windows.net/public/datasets/image_classification/fridgeObjects.zip",
    "fridgeObjects.zip"
)
unzip("fridgeObjects.zip")
```

The generic function to add images to a project is add_images, which takes a vector of filenames, Internet URLs or raw vectors as the images to upload. It returns a vector of image IDs, which are how Custom Vision keeps track of the images it uses.

Let's upload the fridge objects to the project. The method for classification projects has a tags argument which can be used to assign labels to the images as they are uploaded. We'll keep aside 5 images from each class of object to use as validation data.

```r
cans <- dir("fridgeObjects/can", full.names=TRUE)
cartons <- dir("fridgeObjects/carton", full.names=TRUE)
milk <- dir("fridgeObjects/milk_bottle", full.names=TRUE)
water <- dir("fridgeObjects/water_bottle", full.names=TRUE)

# upload all but 5 images from cans and cartons, and tag them
can_ids <- add_images(testproj, cans[-(1:5)], tags="can")
carton_ids <- add_images(testproj, cartons[-(1:5)], tags="carton")
```

If you don't tag the images at upload time, you can do so later with add_image_tags:

```r
# upload all but 5 images from milk and water bottles
milk_ids <- add_images(testproj, milk[-(1:5)])
water_ids <- add_images(testproj, water[-(1:5)])

add_image_tags(testproj, milk_ids, tags="milk_bottle")
add_image_tags(testproj, water_ids, tags="water_bottle")
```

Other image functions to be aware of include list_images, remove_images, and add_image_regions (which is for object detection projects). A useful one is browse_images, which takes a vector of IDs and displays the corresponding images in your browser.

```r
browse_images(testproj, water_ids[1:5])
```

Training the model

Having uploaded the data, we can train the Custom Vision model with train_model. This trains the model on the server and returns a model iteration, which is the result of running the training algorithm on the current set of images. Each time you call train_model, for example to update the model after adding or removing images, you will obtain a different model iteration. In general, you can rely on AzureVision to keep track of the iterations for you, and automatically return the relevant results for the latest iteration.

```r
mod <- train_model(testproj)
```

We can examine the model performance on the training data with the summary method. For this toy problem, the model manages to obtain a perfect fit.

```r
summary(mod)
```

Obtaining predictions from the trained model is done with the predict method. By default, this returns the predicted tag (class label) for the image, but you can also get the predicted class probabilities by specifying type="prob".

```r
validation_imgs <- c(cans[1:5], cartons[1:5], milk[1:5], water[1:5])
validation_tags <- rep(c("can", "carton", "milk_bottle", "water_bottle"), each=5)

predicted_tags <- predict(mod, validation_imgs)

table(predicted_tags, validation_tags)
##               validation_tags
## predicted_tags can carton milk_bottle water_bottle
##   can            4      0           0            0
##   carton         0      5           0            0
##   milk_bottle    1      0           5            0
##   water_bottle   0      0           0            5
```

This shows that the model got 19 out of 20 predictions correct on the validation data, misclassifying one of the cans as a milk bottle.
Deployment

Publishing to a prediction resource

The code above demonstrates using the training endpoint to obtain predictions, which is really meant only for model testing and validation. In a production setting, we would normally publish a trained model to a Custom Vision prediction resource. Among other things, a user with access to the training endpoint has complete freedom to modify the model and the data, whereas access to the prediction endpoint only allows getting predictions.

Publishing a model requires knowing the Azure resource ID of the prediction resource. Here, we'll use the resource object that we created earlier; you can also obtain this information from the Azure Portal.

```r
# publish to the prediction resource we created above
publish_model(mod, "iteration1", pred_res)
```

Once a model has been published, we can obtain predictions from the prediction endpoint in a manner very similar to previously. We create a predictive service object with classification_service, and then call the predict method. Note that a required input is the project ID; you can supply this directly or via the project object. It may also take some time before a published model shows up on the prediction endpoint.

```r
Sys.sleep(60)  # wait for Azure to finish publishing

pred_url <- pred_res$properties$endpoint
pred_key <- pred_res$list_keys()[1]

pred_endp <- customvision_prediction_endpoint(url=pred_url, key=pred_key)

project_id <- testproj$project$id
pred_svc <- classification_service(pred_endp, project_id, "iteration1")

# predictions from prediction endpoint -- same as before
predsvc_tags <- predict(pred_svc, validation_imgs)
table(predsvc_tags, validation_tags)
##               validation_tags
## predsvc_tags   can carton milk_bottle water_bottle
##   can            4      0           0            0
##   carton         0      5           0            0
##   milk_bottle    1      0           5            0
##   water_bottle   0      0           0            5
```

Exporting as standalone

As an alternative to deploying the model to an online predictive service resource, for example if you want to create a custom deployment solution, you can also export the model as a standalone object. This is only possible if the project was created to support exporting. The formats supported include:
ONNX 1.2
CoreML
TensorFlow or TensorFlow Lite
A Docker image for either the Linux, Windows or Raspberry Pi environment
Vision AI Development Kit (VAIDK)

To export the model, simply call export_model and specify the target format. This will download the model to your local machine.

```r
export_model(mod, "tensorflow")
```

More information

AzureVision is part of the AzureR family of packages. This provides a range of tools to facilitate access to Azure services for data scientists working in R, such as AAD authentication, blob and file storage, Resource Manager, container services, Data Explorer (Kusto), and more. If you are interested in Custom Vision, you may also want to check out CustomVision.ai, which is an interactive frontend for building Custom Vision models.

Ruby ends support for its 2.3 series

Amrata Joshi
16 Apr 2019
2 min read
Last month, the team at Ruby announced that support for the Ruby 2.3 series has ended. Security and bug fixes from more recent Ruby versions will no longer be backported to Ruby 2.3. As there won't be any further patches for 2.3, the Ruby team recommends that users upgrade to Ruby 2.6 or 2.5 as soon as possible.

Currently supported Ruby versions

Ruby 2.6 series: currently in the normal maintenance phase. The team will backport bug fixes and will release an urgent fix in the event of an urgent security issue or bug.

Ruby 2.5 series: also in the normal maintenance phase, with the same backporting policy as the 2.6 series.

Ruby 2.4 series: currently in the security maintenance phase. The team won't backport any bug fixes to 2.4 except for security fixes, and will release an urgent fix in the event of an urgent security issue or bug. The team plans to end support for the Ruby 2.4 series by March 31, 2020.

To know more about this news, check out the post by Ruby.

Related reading:
How Deliveroo migrated from Ruby to Rust without breaking production
Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing
Ruby 2.6.0 released with a new JIT compiler

Crystal 0.28.0 released with improved language, ranges, library lookup and more

Amrata Joshi
19 Apr 2019
2 min read
Yesterday, the team at Crystal released Crystal 0.28.0, a new version of the general-purpose, object-oriented programming language. This release comes with improvements to the language, the standard library, networking and much more.

What's new in Crystal 0.28.0

Enums: Enums are now declared with one member per line. In previous versions, users could separate members with spaces or commas; from this version a semicolon is required, and the formatter will migrate commas to semicolons.

Improved ranges: Ranges can now be begin-less or end-less, for the cases where you don't know (or don't need to state) where a range starts or finishes.

Library lookup: The team has simplified how some libraries and static libraries are looked up, so the lookup can be overridden when needed. An environment variable, CRYSTAL_LIBRARY_PATH, is now used when determining the location of libraries to link against.

Numbers in human-readable format: Numbers can now be printed in a human-readable form with the help of Number#humanize, Int#humanize_bytes and Number#format.

Networking: The team has improved HTTP and URI and has made it easy for users to migrate to the new setup. Issues in the URI implementation have been fixed.

Collections: The team has dropped Iterator#rewind. Users can implement #cycle by storing elements in an array.

Bug fixes: Issues in the compiler have been fixed, and errors in some code constructs are now handled. Issues related to method lookup have been fixed, type inference has been improved, and error messages have been improved.

To know more about this news, check out Crystal's post.

Related reading:
Crystal 0.27.0 released
Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Redox OS 0.50 released with support for Cairo, Pixman, and other libraries and packages
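A brief, hedged sketch of the range changes described above; the values are illustrative, and the humanize output shown in the comment is approximate rather than taken from the release notes:

```crystal
arr = [1, 2, 3, 4, 5]

arr[2..]    # end-less range   => [3, 4, 5]
arr[..2]    # begin-less range => [1, 2, 3]

# numbers in human-readable form
puts 1_200_000.humanize    # approximately "1.2M"
```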

.NET Framework October 1, 2020 Cumulative Update Preview Update for Windows 10, version 2004 and Windows Server, version 2004 from .NET Blog

Matthew Emerick
01 Oct 2020
3 min read
Today, we are releasing the October 1, 2020 Cumulative Update Preview Updates for .NET Framework.

Quality and Reliability

This release contains the following quality and reliability improvements.

ASP.NET
Disabled reuse of AppPathModifier in ASP.NET control output.
HttpCookie objects in the ASP.NET request context will be created with configured defaults for cookie flags, instead of .NET-style primitive defaults, to match the behavior of `new HttpCookie(name)`.

CLR [1]
Added a CLR config variable, Thread_AssignCpuGroups (1 by default), that can be set to 0 to disable the automatic CPU group assignment done by the CLR for new threads created by Thread.Start() and thread pool threads, so that an app may do its own thread-spreading.
Addressed a rare data corruption that can occur when using new APIs such as Unsafe.ByteOffset, which are often used with the new Span types. The corruption could occur when a GC operation is performed while a thread is calling Unsafe.ByteOffset from inside a loop.
Addressed an issue with timers with very long due times ticking down much sooner than expected when the AppContext switch "Switch.System.Threading.UseNetCoreTimer" is enabled.

SQL
Addressed a failure that sometimes occurred when a user connected to one Azure SQL database, performed an enclave-based operation, and then connected to another database under the same server with the same attestation URL and performed an enclave operation on the second server.

WCF [2]
Addressed an issue with WCF services sometimes failing to start when starting multiple services concurrently.

Windows Forms
Addressed a regression introduced in .NET Framework 4.8, where the Control.AccessibleName, Control.AccessibleRole, and Control.AccessibleDescription properties stopped working for the following controls: Label, GroupBox, ToolStrip, ToolStripItems, StatusStrip, StatusStripItems, PropertyGrid, ProgressBar, ComboBox, MenuStrip, MenuItems, DataGridView.
Addressed a regression in the accessible name for combo box items in data-bound combo boxes. .NET Framework 4.8 RTM started using the type name instead of the value of the DisplayMember property as the accessible name; this fix uses the DisplayMember again.

[1] Common Language Runtime (CLR)
[2] Windows Communication Foundation (WCF)

Getting the Update

The Cumulative Update Preview is available via Windows Update and the Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

Note: Customers that rely on Windows Update will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please ensure that you carefully review the .NET Framework version applicability, so that you only install updates on systems where they apply.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version: Windows 10 2004 and Windows Server, version 2004
Cumulative Update: .NET Framework 3.5, 4.8 (Catalog 4576945)

Previous Cumulative Updates

The last few .NET Framework updates are listed below for your convenience:
.NET Framework September 2020 Security and Quality Rollup Updates
.NET Framework September 3, 2020 Cumulative Update Preview for Windows 10 2004 and Windows Server, version 2004
.NET Framework August 2020 Cumulative Update Preview
.NET Framework August 2020 Security and Quality Rollup Updates

The post .NET Framework October 1, 2020 Cumulative Update Preview Update for Windows 10, version 2004 and Windows Server, version 2004 appeared first on .NET Blog.

The Packt top 10 for $10

Packt Editorial Staff
19 Nov 2018
5 min read
Right now, every eBook and every video is just $10 each on the Packt store. Need somewhere to get started? Here's our Black Friday top ten for just $10.

Deep Reinforcement Learning Hands-On
Reinforcement learning is the hottest topic in the area of AI research. The technique allows a machine learning agent to grow through trial and error in an interactive environment. Just like a human, it builds its intelligence and understanding by learning from its experiences. In Deep Reinforcement Learning Hands-On, expert author Maxim Lapan reveals the reinforcement learning methods responsible for paradigm-shifting AI such as Google's AlphaGo Zero. Filling the gaps between theory and practice, this book is focused on practical insight into how reinforcement learning works - hands-on! Find out more.

The Modern C++ Challenge
"I would recommend this to anyone" ★★★★ Amazon Review
Take on the modern C++ challenge! Designed to hone and test your C++ skills, The Modern C++ Challenge consists of a stack of programming problems for developers of all levels. These problems don't just test your knowledge of the language, but your skill as a programmer. Think outside the box to come up with the answers, and don't worry: if you're ever stumped, we've got the best solutions to the problems right in the book. So are you up for the challenge? Learn more.

Angular 6 for Enterprise-Ready Web Applications
The demands of modern business for powerful and reliable web applications are huge. In Angular 6 for Enterprise-Ready Web Applications, software development expert and conference speaker Doguhan Uluca takes you through a hands-on and minimalist approach to designing and architecting high-quality Angular apps. More than just a technical manual, this book introduces enterprise-level project delivery methods. Use Kanban to focus on value delivery, communicate design ideas with mock-up tools and build great-looking apps with Angular Material. Find out more.

Mastering Blockchain - Second Edition
"I love this book and have recommended it to everyone I know who is interested in Blockchain. I also teach Blockchain at the graduate school level and have used this book in my course development and teaching...quite simply, there is nothing better on the market." ★★★★★ Amazon Review
2018 has been the year that Blockchain and cryptocurrency hit the mainstream. Fully updated and revised from the bestselling first edition, Mastering Blockchain is dedicated to showing you how to put this revolutionary technology into implementation in the real world. Develop Ethereum applications, discover Blockchain for business frameworks, build Internet of Things apps using Blockchain - and more. The possibilities are endless. Find out more.

Mastering Linux Security and Hardening
Network engineer or systems administrator? You need this book. In one 378-page volume, you'll be equipped with everything you need to know to deliver a Linux system that's resistant to being hacked. Fill your arsenal with security techniques including SSH hardening, network service detection, setting up firewalls, encrypting file systems, and protecting user accounts. When you're done, you'll have a fortress that will be much, much harder to compromise. Find out more.

Mastering Go
The CEO of Shopify famously said "Go will be the server language of the future." Mastering Go shows you how to deliver on that promise. Take your Go skills beyond the basics and learn how to integrate them with production code. Filled with details on the interplay of systems and networking code, Mastering Go will get you writing server-level code that plays well in all environments. Learn more.

Mastering Machine Learning Algorithms
From financial trading to your Netflix recommendations, machine learning algorithms rule modern life. But whilst each algorithm is often a highly-prized secret, all are often built upon a core algorithmic theory. Mastering Machine Learning Algorithms is your complete guide to quickly getting to grips with popular machine learning algorithms. You will be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and will learn how to use them in the best possible manner. If you are looking for a single resource to study, implement, and solve end-to-end machine learning problems and use-cases, this is the book you need. Find out more.

Learn Qt 5
Cross-platform development is a big promise. Qt goes beyond the basics of 'runs on Android and iOS' or 'works on Windows and Linux'. If you build your app with Qt it's truly cross-platform, offering intuitive and easy GUIs for everything from mobile and desktop, to Internet of Things, automotive devices and embedded apps. Learn Qt 5 gives hands-on coverage of the suite of essential techniques that will empower you to progress from a blank page to shipped Qt application. Write your Qt application once, then deploy it to multiple operating systems with ease. Learn more.

Microservice Patterns and Best Practices
Microservices empower your organization to deliver applications continuously and with agility. But the proper architecture of microservices-based applications can be tricky. Microservice Patterns and Best Practices shows you the absolute best way to build and structure your microservices. Start making the right choices at the application development stage, and learn how to cut your monolithic app down into manageable chunks. Find out more.

Natural Language Processing with TensorFlow
In Natural Language Processing with TensorFlow, chief data scientist Thushan Ganegedara unravels the complexities of natural language processing. An expert on working with untested data, Thushan gives you invaluable tools to tackle immense and unstructured data volumes. Processing your raw corpus is key to effective deep learning. Let Thushan show you how with NLP and Python's most popular deep learning library. Learn more.

Gradle 5.0 released with faster builds, incremental java compilation, and annotation processing

Amrata Joshi
27 Nov 2018
4 min read
The team at Gradle has released Gradle 5.0, following Gradle 4.9 which was released in July this year. Gradle 5.0 is faster, safer and more capable than its predecessors. Gradle is a build tool that accelerates developer productivity by helping teams build, automate and deliver software faster. It focuses on build automation and support for multi-language development.

Improvements in Gradle 5.0

Gradle 5.0 comes with incremental compilation and annotation processing to enhance caching and up-to-date checking. It also brings features such as a Kotlin DSL, dependency version alignment, version locking, task timeouts, Java 11 support, and more. The Kotlin DSL helps IDE users with code completion and refactoring.

Faster builds with build cache

Users can experience faster builds the moment they upgrade to Gradle 5.0. Gradle 5.0 allows developers and business executives to build only what is needed by using the build cache and incremental processing features. The build cache reuses the results of previous executions and makes the process faster; it can reduce build time by approximately 90%.

Incremental Java compilation and annotation processing

Gradle 5.0 features an incremental compiler, so CompileJava tasks no longer need to recompile all source files except on the first build. This compiler is the default in this version and is highly optimized. It also supports incremental annotation processing, which increases the effectiveness of incremental compilation in the presence of annotation processors. Users have to upgrade to the latest versions of their annotation processors to benefit from this. The new annotationProcessor configuration is used to manage annotation processors and to put them on the annotation processor path.

Fine-grained transitive dependency management

Gradle 5.0 comes with new features for customizing dependencies and improved POM and BOM support. Gradle 5.0 supports dependency constraints, which are used to define versions or version ranges to restrict direct and transitive dependency versions. In this version, platform definitions or Maven BOM dependencies are natively supported, which allows the use of the Spring Boot platform definition without an external plugin. Dependency alignment aligns the modules in a logical group, and dynamic dependency versions can now be locked for better build reproducibility. This version can also import bill of materials (BOM) files.

Writing Gradle build logic

Users can now write Gradle build scripts in Kotlin. Kotlin's static typing allows tools to provide better IDE assistance (a short Kotlin DSL sketch follows this article's links).

More memory efficient Gradle execution

Lower memory requirements and cache cleanup reduce Gradle's overhead on the system. In Gradle 5.0, many caching mechanisms have been optimized to reduce the default memory footprint of Gradle processes.

New Gradle invocation options

This version supports JUnit 5: JUnit Platform, JUnit Jupiter, and JUnit Vintage, which enables test grouping and filtering. For non-interactive environments such as continuous integration, tasks group their log messages. It's now easy to identify whether a test has failed thanks to a rich command-line console that shows a colored build status. One can now work on interdependent projects with the help of composite builds in Gradle 5.0. This release of Gradle also supports custom arguments, which make running Java applications faster and easier.

New Gradle task and plugin APIs

This version of Gradle features a new Worker API for safe parallel and asynchronous execution. Gradle 5.0's new configuration avoidance APIs allow Gradle to avoid configuring tasks that are not needed for a build. The task timeout API helps to specify a timeout duration for a task, after which it will be interrupted. Custom CLI args in Gradle 5.0 help users configure their custom tasks.

To know more about Gradle 5.0, check out Gradle's official blog.

Related reading:
Gradle 4.9 released!
Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin
Setting Gradle properties to build a project [Tutorial]
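The following is a minimal, hedged sketch of a Gradle 5.0 Kotlin DSL build script showing dependency constraints and the annotationProcessor configuration; the coordinates and versions are illustrative placeholders, not taken from the article:

```kotlin
// build.gradle.kts -- a small sketch, not a complete production build script
plugins {
    `java-library`
}

repositories {
    mavenCentral()
}

dependencies {
    // a dependency declared without an explicit version...
    implementation("com.google.guava:guava")

    // ...constrained here; constraints also restrict transitive versions
    constraints {
        implementation("com.google.guava:guava:27.0-jre")
    }

    // annotation processors go on the annotationProcessor configuration in Gradle 5.0
    compileOnly("org.projectlombok:lombok:1.18.4")
    annotationProcessor("org.projectlombok:lombok:1.18.4")
}

tasks.withType<Test> {
    useJUnitPlatform()   // JUnit 5 (JUnit Platform) support mentioned in the release
}
```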

Soundation releases its first music studio built on WebAssembly

Savia Lobo
16 Nov 2018
2 min read
Soundation, an online music production service, has released its new music studio built on WebAssembly Threads, after working closely with Google. It is the first music production software to run on WebAssembly Threads, which contributes to considerably improved speed, performance, and stability when producing music in a browser. Its online music studio is used by over 80,000 creatives who produce music directly in their web browsers.

For Soundation's users, the WebAssembly technology provides improved performance on multicore machines of between 100 and 300 percent, according to Soundation's measurements. Soundation has been collaborating with Google's WASM and Chrome Audio teams for over a year, working to optimize the implementation of Soundation Studio based on WebAssembly, with support for multithreading and shared memory.

Adam Hasslert, CEO of Soundation, said, "Implementing WebAssembly Threads is a key part of our mission to build the next-generation music production service online. This technology will have a significant impact on how web apps are made in the future, and it's essential for us to lead this development and offer our users the most powerful alternative."

Thomas Nattestad, Product Manager for WebAssembly, said at CDS, "Soundation is one of the first adopters of WebAssembly Threads. They use these Threads to achieve fast, parallelized processing to seamlessly mix songs. Adding just a single Thread doubled their performance, and by the time they added five threads, they more than tripled their performance."

How did Soundation conduct the tests?

Soundation ran tests on a complex Soundation Studio project (consisting of 10 audio tracks, 12 synthesizers, and 270 audio regions with audio samples and notes, with 84 filter effects applied) to generate an audio file. The tests were run on Ubuntu 16.04 with Chrome 72.0.3584.0 (64-bit) on an Intel Core i7-6700HQ. They then compared builds based on WebAssembly, PNaCl, and a native application using different processing buffer sizes in the ring buffer; the WebAssembly version was tested with different numbers of threads.

Here's a video by Thomas Nattestad, the Product Manager for WebAssembly, introducing Soundation: https://www.youtube.com/watch?v=zgOGZgAPUjQ&feature=youtu.be&t=474

Related reading:
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications
Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web

Microsoft makes F# 4.6 and F# tools for Visual Studio 2019 generally available

Bhagyashree R
03 Apr 2019
2 min read
Last week, Microsoft announced the general availability of F# 4.6 and the F# tools for Visual Studio 2019. This release comes with a new record type called Anonymous Records and also a few updates to the F# Core library.

F# 4.6 and F# tools for Visual Studio 2019

For the updates and development of new features in F# 4.6, the team followed an open RFC process. In previous versions, every record type had to be declared with an explicit name before it could be used, which made ad-hoc data modeling awkward; to address exactly that, a new type called Anonymous Records has been introduced. These F# record types do not have any explicit name and can be declared in an ad-hoc fashion (a short sketch follows this article's links).

Updates in the F# Core library

In the F# Core library, updates are made to the 'ValueOption' type. With this release, a new attribute called DebuggerDisplay is added, which helps in debugging, and the IsNone, IsSome, None, Some, op_Implicit, and ToString members are added. In addition to these updates, there is now a 'ValueOption' module, which has the same functions as the Option module.

F# tools for Visual Studio 2019

A lot of focus has been put on improving the performance of the F# tools for Visual Studio, especially for larger solutions. Previously, the F# compiler and tools struggled when used on larger solutions and caused a lot of memory and CPU usage. To address this problem, the team has made updates to the F# parser, reduced cache sizes, significantly reduced allocations when processing format strings, and more. This release also comes with a new feature that intelligently indents pasted code based on where your cursor is. If you have Smart Indent turned on via Tools > Options > Text Editor > F# > Tabs > Smart Indent, this will be enabled automatically.

Read the entire list of updates in F# 4.6 and F# tools for Visual Studio 2019 on Microsoft's blog.

Related reading:
Microsoft releases TypeScript 3.4 with an update for faster subsequent builds, and more
Microsoft, Adobe, and SAP share new details about the Open Data Initiative
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
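A minimal, hedged sketch of the anonymous record syntax described above; the field names and values are purely illustrative:

```fsharp
// An anonymous record: no named type declaration required
let point = {| X = 1.0; Y = 2.0 |}

// Anonymous records are typed structurally, so a function can accept them by shape
let length (p: {| X: float; Y: float |}) =
    sqrt (p.X * p.X + p.Y * p.Y)

printfn "%f" (length point)
```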

Introducing Gitpod, a one-click IDE for GitHub

Bhagyashree R
05 Apr 2019
3 min read
Today, Sven Efftinge, the Technical Co-founder of Gitpod.io, announced the launch of Gitpod, a cloud IDE that tightly integrates with GitHub. Along with the launch, starting from today, the Gitpod app is also available on the GitHub marketplace.

What is Gitpod?

While working on a project, a lot of time goes into switching contexts between projects and branches, setting up a development environment, or simply waiting for the build to complete. To reduce this time and effort, Gitpod provides developers with disposable, ready-to-code development environments for their GitHub projects.

What are its advantages?

Automatically pre-builds every commit: Gitpod, similar to continuous integration tools, automatically pre-builds every commit. So, when you open a Gitpod workspace you will not only find the code and tools ready, but also that the build has already finished.

Easily go back to previous releases: A Gitpod workspace is configured through a .gitpod.yml file written in YAML (a hedged example follows this article's links). This file is versioned with your code, so if at some point you need to go back to old releases, you can easily do that.

Pre-installed VS Code extensions: You get several VS Code extensions pre-installed in Gitpod, such as Go support from Microsoft's own extension. The team plans to add more VS Code extensions in the near future and later will allow developers to define any extensions they want.

Supports full-featured terminals: In addition to supporting one of the best code editors, Gitpod comes with full-featured terminals backed by a Linux container running in the cloud. So, you get the same command-line tools you would use locally.

Better collaboration: Gitpod supports two major features for collaboration:
Sharing running workspaces: This feature allows you to share a workspace with a remote colleague. It comes in handy when you want to hunt down a bug together or do some pair programming.
Snapshots: With this feature, you can take an immutable copy of your dev environment at any point in time and share the link wherever you want. Users receive an exact clone of the environment, including all state and even the UI layout.

How can you use Gitpod?

For creating a workspace you have two options:
You can prefix any GitHub URL with gitpod.io/#.
You can also use the Gitpod browser extension, available for Chrome and Firefox, which adds a button to GitHub that does the prefixing for you.

You can watch the following video to see exactly how Gitpod works: https://www.youtube.com/watch?v=D41zSHJthZI

Read more in detail on Gitpod's official website.

Related reading:
Introducing git/fs: A native git client for Plan 9
'Developers' lives matter': Chinese developers protest over the "996 work schedule" on GitHub
Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!
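As a hedged illustration of the .gitpod.yml configuration mentioned above, here is what a minimal file for a Node.js project might look like; the image name, commands, and port are illustrative placeholders, not taken from the article:

```yaml
# .gitpod.yml -- versioned alongside the project's source code
image: gitpod/workspace-full    # assumed base image for the workspace

tasks:
  - init: npm install           # runs while the commit is being pre-built
    command: npm run dev        # runs when the workspace is opened

ports:
  - port: 3000                  # expose the dev server to the browser
```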

Programming news bulletin - Thursday 19 April

Richard Gall
19 Apr 2018
2 min read
Welcome to this week's programming bulletin. There are a number of new releases that should interest fans of Spring, a preview of the next installment of .NET Core and Entity Framework, and an update from JetBrains. Remember to watch this space every Thursday for more programming updates.

Programming news from the Packt Hub

New features in C# 8.0: https://hub.packtpub.com/exciting-new-features-in-c-8-0/

Programming news from across the web

Python launches an updated PyPI; the legacy version is set to close April 30.

Spring Cloud Stream 2.0 is now generally available. The emerging Spring framework, which helps users build event-driven microservices capable of scaling quickly, has just announced the general release of version 2.0. With 'a complete revamp of content-type negotiation functionality to address performance, flexibility, and... consistency', plus many more exciting features, this could help push the framework forward in the world of microservices development.

Chaos Monkey is now available for Spring Boot. In case you don't know, Chaos Monkey is a tool for testing large-scale distributed systems that is inspired by the principles of chaos engineering. In this instance, it works as a small library that you integrate as a dependency within your application. It then attacks various components of your app - like a monkey causing chaos.

Microsoft announces a preview of .NET Core 2.1 and Entity Framework 2.1. The .NET engineering team has made a preview of the latest versions of .NET Core and Entity Framework available - the team expects to reach a final full release in the next few months.

Javalin 1.6.0 released. The new release features some improvements to performance and async requests.

JetBrains releases IntelliJ IDEA 2018.1.1: https://blog.jetbrains.com/idea/2018/04/intellij-idea-2018-1-1-is-released/

Wine 4.0 released with Vulkan, Direct3D support among other features

Sugandha Lahoti
23 Jan 2019
3 min read
The Wine 4.0 stable version was released yesterday. It comes with four main features: support for Vulkan, Direct3D 12, game controllers, and high-DPI support on Android. In total, there are over 6,000 individual changes and improvements. Wine is an implementation of the Windows Application Programming Interface (API) that makes it possible to run Windows programs alongside Linux or any other Unix-like operating system. Wine can also be used to recompile a program into a format that Linux can understand more easily, though access to the Windows program's source code is required.

Major improvements in Wine 4.0

Direct3D 12 support

Wine 4.0 provides initial support for Direct3D 12; it requires the vkd3d library and a Vulkan-capable graphics card. The Direct3D graphics card database recognizes more graphics cards. The multi-threaded command stream feature is enabled by default. OpenGL core contexts are always used by default, when available, for all graphics cards and all versions of Direct3D before 12. Several Direct3D 11 interfaces have been updated to version 11.2, and DXGI interfaces have been updated to version 1.6. Support for using the correct swap interval is implemented for both DXGI and DirectDraw applications. Application-configurable frame latency is implemented for Direct3D 9Ex and DXGI applications.

Vulkan support

In Wine 4.0, a Vulkan driver is implemented using the host Vulkan libraries under X11, or MoltenVK on macOS. Wine 4.0 also provides a built-in vulkan-1 loader as an alternative to the SDK loader. A number of Direct2D interfaces have been updated to version 1.2.

Other features

An ARGB visual can be used as the default X11 visual. The old 16-bit DIB.DRV driver is implemented using the DIB engine. For large polygons, polygon drawing is much faster in the DIB engine.

Kernel improvements

Support for running DOS binaries under Wine has been removed. In Wine 4.0, all the CPU control and debug registers can be accessed by kernel drivers, including on 64-bit. Events, semaphores, mutexes, and timers are also implemented in kernel mode for device drivers. The WaitOnAddress synchronization primitives are supported. Application settings, compatibility information, and execution levels are recognized in application manifests.

Other changes

Wine 4.0 supports the new version of the Android graphics buffer allocator API, enabling graphics support on Android version 8 and above. Android x86-64 platforms are also supported in 64-bit mode.

New external dependencies

The Vulkan library is used to implement the Vulkan graphics driver. The vkd3d library is used to implement Direct3D 12 on top of Vulkan. The SDL library is used to support game controllers. The GSSAPI library is used to implement Kerberos authentication.

These are a select few changes. For a full list of improvements and additions, check out the release notes.

Related reading:
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and more
Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

GraalVM 19.0 releases with Java 8 SE compliant Java Virtual Machine, and more!

Bhagyashree R
13 May 2019
2 min read
Last week, the team behind GraalVM announced the release of GraalVM 19.0. This is the first production release, and it comes with early adopter Windows support, a class initialization update in GraalVM Native Image, a Java 8 SE compliant Java Virtual Machine, and more.

https://twitter.com/graalvm/status/1126607204860289024

GraalVM is a polyglot virtual machine that allows users to run applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Kotlin, and Clojure, and LLVM-based languages such as C and C++.

Updates in GraalVM 19.0

GraalVM Native Image

GraalVM Native Image compiles Java code ahead-of-time to a standalone executable called a native image. Currently, it is available as an early adopter plugin, and you can install it by executing the 'gu install native-image' command. With this release, the way classes are initialized in a native image has changed: application classes are now initialized at run time by default, and all JDK classes are initialized at build time. This change was made to improve the user experience, as it eliminates the need to write substitutions and to deal with instances of unsupported classes ending up in the image heap.

Early adopter Windows support

With this release, early adopter builds for Windows users are also available. These builds include the JDK with the GraalVM compiler enabled, Native Image capabilities, GraalVM's JavaScript engine, and the developer tools.

Java 8 SE compliant Java VM

This release comes with a Java 8 SE compliant Java Virtual Machine, based on OpenJDK 1.8.0_212.
Read also: No more free Java SE 8 updates for commercial use after January 2019

Node.js with polyglot capabilities

This release comes with Node.js with polyglot capabilities, based on Node.js 10.15.2. With these capabilities, you can leverage Java or Scala libraries from Node.js and also use Node.js modules in Java applications (a hedged sketch of the polyglot API follows this article's links).

JavaScript engine compliant with ECMAScript 2019

GraalVM 19.0 comes with a JavaScript engine compliant with the latest ECMAScript 2019 standard. You can now migrate from the Rhino or Nashorn JavaScript engines, which are no longer maintained, to GraalVM's JavaScript engine, which is compatible with the latest standards.

Check out the GraalVM 19.0 release notes for more details.

Related reading:
OpenJDK team's detailed message to NullPointerException and explanation in JEP draft
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
What's new in ECMAScript 2018 (ES9)?
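As a hedged sketch of the polyglot capabilities mentioned above, the snippet below uses GraalVM's org.graalvm.polyglot API to evaluate a JavaScript expression from Java; the expression itself is illustrative:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotDemo {
    public static void main(String[] args) {
        // Context provides access to the guest languages shipped with GraalVM
        try (Context context = Context.create()) {
            // evaluate JavaScript from Java and read the result back as an int
            Value result = context.eval("js",
                    "[1, 2, 3].map(x => x * 2).reduce((a, b) => a + b)");
            System.out.println("Computed in JS: " + result.asInt()); // prints 12
        }
    }
}
```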

Introducing Netcap, a framework for secure and scalable network traffic analysis

Amrata Joshi
24 Dec 2018
5 min read
Last week, a new traffic analysis framework, Netcap (NETwork CAPture), was released. It converts a stream of network packets into accessible, type-safe structured data representing specific protocols or custom abstractions.

https://twitter.com/dreadcode/status/1076267396577533952

The project is implemented in the Go programming language, which provides a garbage-collected, memory-safe runtime; this matters because parsing untrusted input can be dangerous. It was developed for a series of experiments (filtering, dataset labeling, encoding, error logging, and more) in the thesis "Implementation and evaluation of secure and scalable anomaly-based network intrusion detection". The Netcap project won second place at Kaspersky Lab's SecurIT Cup 2018 in Budapest.

Why was Netcap introduced?

Corporate communication networks are frequently attacked with previously unseen malware or insider threats, which makes defense mechanisms such as anomaly-based intrusion detection systems necessary for detecting security incidents. Both signature-based and anomaly-based detection strategies rely on features extracted from network traffic, which requires secure and extensible collection strategies. The solutions that are available are written in low-level systems programming languages that require manual memory management and suffer from vulnerabilities that allow a remote attacker to disable the network monitor; others are lacking in flexibility and data availability. To tackle these problems and ease future experiments with anomaly-based detection techniques, Netcap was released.

Netcap uses Google's protocol buffers to encode its output, which makes the output accessible across a wide range of programming languages. The output can also be emitted as comma-separated values, a common input format for data analysis tools and systems. Netcap is extensible: it provides multiple ways of adding support for new protocols and implements the parsing logic in a memory-safe way. It provides high-dimensional data about the observed traffic and allows researchers to focus on new approaches for detecting malicious behavior in network environments instead of building data collection mechanisms and post-processing steps. It features a concurrent design that makes use of multi-core architectures. The command-line tool focuses on usability and readability and displays progress while processing packets.

Why Go?

Go, commonly referred to as Golang, is a statically typed programming language released by Google in 2009. Netcap opted for Go because its syntax is similar to the C programming language while adopting ideas from other languages such as Python and Erlang. Go is commonly used for network programming and backend implementations, compiles quickly, and easily generates statically linked binaries.

Goroutines, Go's lightweight asynchronous processes, are multiplexed onto OS threads as required. If a goroutine blocks, the corresponding OS thread blocks as well, but the other goroutines are not affected, which helps Netcap keep processing. Goroutines are also much cheaper than OS threads and allocate resources dynamically as needed. Since Go offers channels as a lightweight way to communicate between goroutines, synchronization and messaging in Netcap become easier (a generic sketch of this pattern follows this article's links).

Design goals of Netcap

It provides memory safety when parsing untrusted input.
It is easy to extend.
Its output format is interoperable with many different programming languages.
It features a concurrent design.
Its output has a small storage footprint on disk.
It provides maximum data availability.
It allows the implementation of custom abstractions.
It comes with rich platform and architecture support.

Future scope

Future development on Netcap will focus on increasing the unit test coverage and on performance-critical operations. The output of Netcap will be compared to other tools to ensure no data is missed or misinterpreted. Netcap will be extended in the future with functionality such as support for extracted features. The framework might be used for experiments on datasets for accurate predictions on network data, and encoding feature vectors could also be implemented as part of the Netcap framework. An interface for adding additional application layer encoders could be added. Netcap will also be evaluated for monitoring industrial control systems communication. The recently open-sourced fingerprinting strategy for SSH handshakes (HASSH) by Salesforce could prove beneficial in the future.

Check out the slides from the presentation by Philipp Mieden (the creator of Netcap) at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities on ResearchGate.

Many users are appreciating the effort put into this project and are eagerly awaiting the features that might be released in the future. However, a few Hacker News users think that the functionality provided by the application is still unclear, and that the thesis misses some points, the major one being how the tool is warranted as a whole: how will the anomalies actually get detected? A lot of questions are still unanswered, but it will be interesting to see what Philipp comes up with next.

https://twitter.com/mythicalcmd/status/1076459582963310593

Related reading:
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Netflix adopts Spring Boot as its core Java framework
Facebook open-sources PyText, a PyTorch based NLP modeling framework
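The article credits goroutines and channels for Netcap's concurrent design. The following is a generic, hedged Go sketch of that fan-out pattern; it is not Netcap's actual API (which the article does not show), and the packet-processing function is a placeholder:

```go
package main

import (
	"fmt"
	"sync"
)

// processPacket stands in for per-packet decoding work; a real collector would
// parse protocol layers here and emit a structured audit record.
func processPacket(p []byte) string {
	return fmt.Sprintf("record(%d bytes)", len(p))
}

func main() {
	packets := make(chan []byte)
	records := make(chan string)

	// fan out: several worker goroutines consume packets from a shared channel
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range packets {
				records <- processPacket(p)
			}
		}()
	}

	// close the records channel once every worker has finished
	go func() {
		wg.Wait()
		close(records)
	}()

	// feed a few dummy packets, then close the input channel
	go func() {
		for i := 1; i <= 3; i++ {
			packets <- make([]byte, i*64)
		}
		close(packets)
	}()

	// collect the structured records
	for r := range records {
		fmt.Println(r)
	}
}
```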

TypeScript 3.1 releases with typesVersions redirects, mapped tuple types

Bhagyashree R
28 Sep 2018
3 min read
After announcing the TypeScript 3.1 RC last week, Microsoft released TypeScript 3.1 as a stable version yesterday. This release comes with support for mapped array and tuple types, easier properties on function declarations, typesVersions for version redirects, and more.

Support for mapped array and tuple types

TypeScript has a concept called a 'mapped object type', which can generate new types out of existing ones. Instead of introducing a new concept for mapping over a tuple, mapped object types now just "do the right thing" when iterating over tuples and arrays. This means that if you are using existing mapped types like Partial or Required from lib.d.ts, they now also automatically work on tuples and arrays, eliminating the need to write a ton of overrides (a short sketch follows this article's links).

Properties on function declarations

For any function or const declaration that's initialized with a function, the type-checker will analyze the containing scope to track any added properties. This enables users to write canonical JavaScript code without resorting to namespace hacks. Additionally, this approach for property declarations allows users to express common patterns like defaultProps and propTypes on React stateless function components (SFCs).

Introducing typesVersions for version redirects

Users are always excited to use new type system features in their programs or definition files. However, for library maintainers, this creates a difficult situation where they are forced to choose between supporting new TypeScript features and not breaking older versions. To solve this, TypeScript 3.1 introduces a new feature called typesVersions. When TypeScript opens a package.json file to figure out which files it needs to read, it first looks for the typesVersions field. The field tells TypeScript to check which version of TypeScript is running; if the version in use is 3.1 or later, it figures out the path you've imported relative to the package and reads from the package's ts3.1 folder.

Refactor from .then() to await

With this new refactoring, you can now easily convert functions that return promises constructed with chains of .then() and .catch() calls into async functions that use await.

Breaking changes

Vendor-specific declarations removed: TypeScript's built-in .d.ts library and other built-in declaration file libraries are partially generated using Web IDL files provided by the WHATWG DOM specification. While this makes keeping lib.d.ts up to date easier, many vendor-specific types have been removed.

Differences in narrowing functions: Using the typeof foo === "function" type guard may provide different results when intersecting with relatively questionable union types composed of {}, Object, or unconstrained generics.

How to install the latest version

You can get the latest version through NuGet or via npm by running:

npm install -g typescript

According to their roadmap, TypeScript 3.2 is scheduled to be released in November with strictly-typed call/bind/apply on function types. To read the full list of updates, check the official announcement on MSDN.

Related reading:
TypeScript 3.1 RC released
TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
How to work with classes in TypeScript
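A short, hedged sketch of two of the features above, mapped tuple types and properties on function declarations; the names and values are illustrative:

```typescript
// Mapped object types such as Partial now map over tuples element-wise
type Point = [number, number];
type PartialPoint = Partial<Point>;   // roughly a tuple of optional numbers, not a plain object type

function midpoint(a: Point, b: Point): Point {
    return [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2];
}

// Properties assigned to a function declaration in the same scope are now tracked
function greet(name: string) {
    return `Hello, ${name}!`;
}
greet.defaultName = "world";          // no namespace hack needed in TypeScript 3.1

console.log(greet(greet.defaultName));
```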