Tech News

Ethereum’s 1000x Scalability Upgrade ‘Serenity’ is coming with better speed and security: Vitalik Buterin at Devcon

Melisha Dsouza
02 Nov 2018
3 min read
Ethereum 2.0 is coming soon, and it could increase the Ethereum network's capacity to process transactions a thousandfold. At Devcon, the annual Ethereum developer conference, Vitalik Buterin, the creator of this second-largest blockchain, announced that the upgrade formerly known as Ethereum 2.0 is now called 'Serenity'. Buterin also spoke about the massive efforts that have gone into upgrading the network in the past, especially around issues like the DAO hack and "super-quadratic sharding" that bogged the team down.

What can we expect in Serenity?

"We have been actively researching, building, and now, finally getting them all together" -Vitalik Buterin

In September, Darren Langley, senior blockchain developer at Rocket Pool, revealed the roadmap for Ethereum 2.0. 'Serenity' will encompass multiple projects that Ethereum developers have been working on since 2014. It will see Ethereum finally switch from proof-of-work to proof-of-stake, a model in which people and organizations holding ether "stake" their own coins in order to maintain the network, earning block rewards for doing so. This will also help achieve a sharded blockchain verifying data on the network, increasing overall efficiency. The new upgrade will also make the network much faster, more secure, less energy-intensive, and capable of handling thousands of transactions per second.

Serenity will include eWASM, a replacement for the existing Ethereum Virtual Machine (EVM) that runs smart contracts. eWASM is expected to double the transaction throughput rate compared to the EVM. Buterin added that before the official launch of Serenity, developers will make some final tweaks, including stabilizing protocol specifications and cross-client testnets. Buterin believes Ethereum will soar with the Serenity upgrade.

During the conference, Buterin said that Serenity will be introduced in four phases:

Phase one will include an initial version with a proof-of-stake beacon chain. This would co-exist alongside Ethereum itself and will allow Casper validators to participate.
Phase two will be a simplified version of Serenity with limited features, excluding smart contracts and money transfers from one shard to another.
Phase three will be an amplified version of Serenity with cross-shard communication, where users can send funds and messages across different shards.
Phase four will bring the final tweaks and optimized features.

Is Vitalik Buterin taking a backseat?

In a conversation with MIT Technology Review, Buterin said that it's time for him to start fading into the background as "a necessary part of the growth of the community." Taking a cue from Ethereum itself being decentralized, where a single component failure cannot bring down the whole system, Buterin is "out of the decision-making in a lot of ways," said Hudson Jameson of the Ethereum Foundation. This will pave the way for the community to thrive and become more decentralized. Buterin says that his involvement in the project has amounted to "a significantly smaller share of the work than I had two or three years ago," adding that downsizing his influence is "something we are definitely making a lot of progress on."

Ethereum's development will not end with Serenity, since important issues such as transaction fees and governance are yet to be addressed. Buterin and his team have already begun planning future tweaks along with more tech improvements.

To know more about this news, head over to OracleTimes.
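The staking model described above can be sketched in a few lines of Python: validators lock up ether, and the chance of proposing the next block (and collecting the reward) is proportional to stake. This is only a toy illustration of the proof-of-stake idea, not Serenity's actual validator-selection algorithm; the validator names and stake values are made up.

```python
import random

# Hypothetical validators and the ether each has staked.
stakes = {"alice": 32.0, "bob": 64.0, "carol": 160.0}

def pick_proposer(stakes):
    """Choose the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = list(stakes.values())
    return random.choices(validators, weights=weights, k=1)[0]

# Over many rounds, validators with more at stake propose more blocks,
# and therefore collect more block rewards, as the article describes.
counts = {v: 0 for v in stakes}
for _ in range(10_000):
    counts[pick_proposer(stakes)] += 1
print(counts)  # roughly 12.5% / 25% / 62.5% of the draws
```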
Aragon 0.6 released on Mainnet allowing Aragon organizations to run on Ethereum Mainnet
Vitalik Buterin's new consensus algorithm to make Ethereum 99% fault tolerant

An early access to Sailfish 3 is here!

Savia Lobo
02 Nov 2018
3 min read
This week, Sailfish OS announced the early release of its third-generation software, Sailfish 3, and made it available to all Sailfish users who opted in to early access updates. Sami Pienimäki, CEO and co-founder of Jolla Ltd, said in his release post, "we are expanding the Sailfish community program, “Sailfish X”, with a few of key additions next week: on November 8 we release the software for various Sony Xperia XA2 models."

Why the name 'Sailfish'?

Sailfish 3.0.0 is named after the legendary Lemmenjoki National Park in Northern Lapland. As the release post puts it: "We've always aimed at respecting our Finnish roots in naming our software versions: previously we've covered lakes and rivers, and now we're set to explore our beautiful national parks."

Sailfish 3 will be rolled out in phases, with many features deployed across several software releases. The first phase, Sailfish 3.0.0, has been available as an early access version since October 31st. The customer release is expected to roll out in the coming weeks, and the next release, 3.0.1, is expected in early December.

Security and corporate features of Sailfish 3

Sailfish 3 has a deeper level of security, which makes it a go-to option for various corporate and organizational solutions and other use cases. New and enhanced features in Sailfish 3 include Mobile Device Management (MDM), fully integrated VPN solutions, enterprise WiFi, data encryption, and better and faster performance. It also offers full support for regional infrastructures, including steady releases and OS upgrades, local hosting, training, and a flexible feature set to support specific customer needs.

User experience highlights of Sailfish 3.0.0

New Top Menu: quick settings and shortcuts can now be accessed anywhere
Light ambiences: a fresh new look for Sailfish OS
Data encryption: memory card encryption is now available; device file system encryption is coming in future releases
New keyboard gestures: quickly change keyboard layouts with one swipe
USB On-The-Go storage: connect to different kinds of external storage devices
Camera improvements: a new lock screen camera roll lets you review the photos you just took without unlocking the device

Further, thanks to a rewritten way of launching apps and loading views, UI performance is much better in Sailfish 3. Sami mentions, "You can start to enjoy the faster Sailfish already now with the 3.0.0 release and the upcoming major Qt upgrade will further improve the responsiveness & performance resulting to 50% better overall performance."

To know more about Sailfish 3 in detail, visit its official website.

GitHub now allows issue transfer between repositories; a public beta version
Introducing Howler.js, a Javascript audio library with full cross-browser support
BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al

Facebook launches Horizon, its first open source reinforcement learning platform for large-scale products and services

Natasha Mathur
02 Nov 2018
3 min read
Facebook launched Horizon, its first open source reinforcement learning platform for large-scale products and services, yesterday. The workflows and algorithms in Horizon are built on open source frameworks such as PyTorch 1.0, Caffe2, and Spark, which makes Horizon accessible to anyone who uses RL at scale. "We developed this platform to bridge the gap between RL's growing impact in research and its traditionally narrow range of uses in production. We deployed Horizon at Facebook over the past year, improving the platform's ability to adapt RL's decision-based approach to large-scale applications", reads the Facebook blog.

Facebook has already used the new platform to gain performance benefits such as delivering more relevant notifications, optimizing streaming video bit rates, and improving personalized suggestions in Messenger. Given Horizon's open design and toolset, however, it can also benefit other organizations using RL.

Harnessing reinforcement learning for large-scale production

Horizon uses reinforcement learning to make decisions at scale while taking into account the issues specific to production environments: feature normalization, distributed training, large-scale deployment, and data sets with thousands of varying feature types. According to Facebook, applied RL models are more sensitive to noisy and unnormalized data than traditional deep networks, which is why Horizon preprocesses state and action features in parallel with the help of Apache Spark. Once the training data is preprocessed, PyTorch-based algorithms are used for normalization and training on graphics processing units. Horizon's design focuses on large clusters, where distributed training on many GPUs at once allows engineers to solve problems with millions of examples. Horizon supports algorithms such as Deep Q-Network (DQN), parametric DQN, and deep deterministic policy gradient (DDPG) models.

As part of the training workflow, Horizon runs counterfactual policy evaluation (CPE), a set of methods used to predict the performance of a newly learned policy; the CPE results are logged to TensorBoard. Once training is done, Horizon exports the models using ONNX so they can be served efficiently at scale (a rough code sketch of this train-and-export flow appears after the related reading below).

In many RL domains, the performance of a model is usually measured by trying it out. Since Horizon runs in large-scale production, however, it is important that models are tested thoroughly before being deployed at scale. To achieve this, for the policy optimization tasks Horizon solves, the training workflow automatically runs state-of-the-art policy evaluation techniques, including sequential doubly robust policy evaluation and MAGIC. The evaluation is combined with anomaly detection, which automatically alerts engineers if a new iteration of the model performs radically differently from the previous one, before the policy is deployed to the public.

Facebook plans to add new models and model improvements, along with CPE integrated with real metrics, to Horizon in the future. "We are leveraging the Horizon platform to discover new techniques in model-based RL and reward shaping, and using the platform to explore a wide range of additional applications at Facebook, such as data center resource allocation and video recommendations. Horizon could transform the way engineers and ML models work together", says Facebook.

For more information, check out the official Facebook blog.

Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues
Facebook open sources QNNPACK, a library for optimized mobile deep learning
Facebook's Child Grooming Machine Learning system helped remove 8.7 million abusive images of children
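As referenced above, here is a minimal PyTorch sketch of the train-then-export flow the post describes: one DQN-style update on a synthetic batch, followed by an ONNX export for serving. This is not Horizon's actual code; the network shape, hyperparameters, and data are invented, and production details (target networks, Spark preprocessing, feature normalization, CPE) are omitted.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """A tiny Q-network: maps a state vector to one Q-value per action."""
    def __init__(self, n_features, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

model = QNetwork(n_features=8, n_actions=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One DQN-style update on a fake batch: regress Q(s, a) toward
# r + gamma * max_a' Q(s', a').
states = torch.randn(32, 8)
actions = torch.randint(0, 4, (32,))
rewards = torch.randn(32)
next_states = torch.randn(32, 8)
gamma = 0.99

q = model(states).gather(1, actions.unsqueeze(1)).squeeze(1)
with torch.no_grad():
    target = rewards + gamma * model(next_states).max(dim=1).values
loss = nn.functional.smooth_l1_loss(q, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Export the trained model to ONNX so it can be served at scale,
# as the post describes Horizon doing.
torch.onnx.export(model, torch.randn(1, 8), "dqn.onnx")
```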

Zimperium zLabs discloses a new critical vulnerability in multiple high-privileged Android services to Google

Natasha Mathur
02 Nov 2018
5 min read
Tamir Zahavi-Brunner, Security Researcher at Zimperium zLabs, posted the technical details of a vulnerability affecting multiple high-privileged Android services, along with its exploit, earlier this week. Brunner had disclosed the vulnerability to Google, which designated it CVE-2018-9411.

As per Brunner, Google claims that Project Treble (introduced as part of Android 8.0 Oreo to make updates faster and easier for OEMs to roll out to devices) benefits Android security. The vulnerability Brunner disclosed, however, shows that elements of Project Treble can also hamper Android security. "This vulnerability is in a library introduced specifically as part of Project Treble and does not exist in a previous library which does pretty much the same thing. This time, the vulnerability is in a commonly used library, so it affects many high-privileged services", says Brunner.

One of the massive changes that came with Project Treble is the split of many system services. Previously, these system services contained both AOSP (Android Open Source Project) and vendor code. After Project Treble, each of these services was split into one AOSP service and one or more vendor services called HAL services. This means that data which previously passed within the same process between AOSP and vendor code now has to pass through IPC (which enables communication between different Android components) between AOSP and HAL services. Most IPC in Android goes through Binder (which provides a remote procedure call mechanism between client and server processes), so Google decided that the new IPC should do so as well, with some modifications: it introduced HIDL, a whole new format for the data passed through Binder IPC, which makes use of shared memory to maintain simplicity and good performance. HIDL is supported by a new set of libraries and is dedicated to the new Binder domain for IPC between AOSP and HAL services. HIDL comes with its own new implementation of many object types; an important one for sharing memory is hidl_memory.

Technical details of the Vulnerability

The hidl_memory structure comprises the members mHandle (a HIDL object which holds file descriptors), mSize (the size of the memory to be shared), and mName (which represents the type of memory). These structures are transferred through Binder in HIDL, where complex objects (like hidl_handle or hidl_string) have their own custom code for writing and reading the data. Transferring these structures between 64-bit processes causes no issues; in 32-bit processes, however, the size gets truncated to 32 bits, so only the lower 32 bits are used. So if a 32-bit process receives a hidl_memory whose size is bigger than UINT32_MAX (0xFFFFFFFF), the actually mapped memory region will be much smaller (see the sketch below). "For instance, for a hidl_memory with a size of 0x100001000, the size of the memory region will only be 0x1000. In this scenario, if the 32-bit process performs bounds checks based on the hidl_memory size, they will hopelessly fail, as they will falsely indicate that the memory region spans over more than the entire memory space. This is the vulnerability!" writes Brunner.

With the vulnerability identified, the next step is finding a target: an eligible HAL service such as android.hardware.cas, or MediaCasService. MediaCasService allows apps to decrypt encrypted data.
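To make the truncation concrete, here is a small Python sketch of the arithmetic behind the bug. The sizes mirror the example from the post; the bounds check and offset are invented for illustration.

```python
UINT32_MAX = 0xFFFFFFFF

declared_size = 0x100001000               # 64-bit size carried in hidl_memory
mapped_size = declared_size & UINT32_MAX  # what a 32-bit process actually maps

print(hex(mapped_size))                   # 0x1000: far smaller than declared

# A bounds check written against the declared size now passes for offsets
# that lie well outside the 0x1000 bytes that were really mapped.
offset = 0x10000
assert offset < declared_size             # the check "succeeds"...
assert offset >= mapped_size              # ...yet the access is out of bounds
```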
Exploiting the Vulnerability

To exploit the vulnerability, two other issues need to be solved: finding the address of the shared memory (and of other interesting data), and making sure that the shared memory gets mapped at the same location each time. The second issue is solved by looking at the memory maps of the linker in the service's memory space. To solve the first issue, the data in the linker_alloc straight after the gap is analyzed, and a shared memory is mapped before a blocked thread stack, which makes it easy to reach the memory relatively through the vulnerability. Instead of getting only one thread into that blocked state, multiple (5) threads are generated, which in turn causes more threads to be created and more thread stacks to be allocated. Once the shared memory is mapped before the blocked thread stack, the vulnerability is used to read two things from the thread stack: the thread stack address, and the address where libc is mapped, in order to build a ROP chain.

The last step is executing this ROP chain. However, Brunner states that SELinux limitations on the process prevent turning the ROP chain into full arbitrary code execution: "There is no execmem permission, so anonymous memory cannot be mapped as executable, and we have no control over file types which can be mapped as executable". Since the main objective is to obtain the QSEOS version, the ROP chain is built to do exactly that, while making sure the thread does not crash immediately after running it. The process is still left in a somewhat unstable state, so to leave everything clean, the service is deliberately crashed (by writing to an unmapped address) in order to let it restart.

For complete information, read the official Zimperium blog post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
A kernel vulnerability in Apple devices gives access to remote code execution

GNU Bison 3.2 got rolled out

Amrata Joshi
01 Nov 2018
2 min read
On Monday, the team at Bison announced the release of GNU Bison 3.2, a general-purpose parser generator. It converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser, employing LALR(1) parser tables. This release is bootstrapped with the following tools: Gettext 0.19.8.1, Autoconf 2.69, Automake 1.16.1, Flex 2.6.4, and Gnulib v0.1-2176-ga79f2a287.

GNU Bison, commonly known as Bison, is a parser generator that is part of the GNU Project. It is used to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages. One has to be fluent in C or C++ programming in order to use Bison.

Bison 3.2 comes with massive improvements to the deterministic C++ skeleton, lalr1.cc, while maintaining compatibility with C++98. Move-only types can now be used for semantic values when working with Bison's variants. In modern C++ (C++11 and later), one should always use 'std::move' with the values of the right-hand side symbols ($1, $2, etc.), as they will be popped from the stack anyway. Using 'std::move' is mandatory for move-only types such as unique_ptr, and it provides a significant speedup for large types such as std::string or std::vector. A warning is issued when automove is enabled and a value is used several times.

Major Changes in Bison 3.2

Support for DJGPP (DJ's GNU Programming Platform), which has been unmaintained and untested for years, is now deemed obsolete. Unless there is activity to revive it, it will be removed.
To denote the output stream, printers should now use 'yyo' instead of 'yyoutput'.
Variant-based symbols in C++ should now use emplace() instead of build().
In C++ parsers, parser::operator() is now a synonym for parser::parse.
A comment in the generated code now emphasizes that users should not depend on non-documented implementation details, such as macros starting with YY_.
A new section, "A Simple C++ Example", is now a tutorial for parsers in C++.

Bug Fixes in Bison 3.2

Major bug fixes in this release address portability issues with MinGW, VS2015, and Flex, as well as issues in the test suite.

To know more about this release, check out the official mailing list.

Mio, a header-only C++11 memory mapping library, released!
Google releases Oboe, a C++ library to build high-performance Android audio apps
The 5 most popular programming languages in 2018

TimescaleDB 1.0 officially released

Amrata Joshi
01 Nov 2018
3 min read
On Tuesday, the team at Timescale announced the official production release of TimescaleDB 1.0; the initial release candidate came two months ago. With this official release, TimescaleDB 1.0 is the first enterprise-ready time-series database that supports full SQL and scale. TimescaleDB has crossed 1M downloads and has production deployments at Comcast, Cray, Cree, and more. Mike Freedman, co-founder and CTO at Timescale, says, "Since announcing our first release candidate in September, Timescale's engineering team has merged over 50 PRs to harden the database, improving stability and ease-of-use."

Major updates in TimescaleDB 1.0

TimescaleDB 1.0 comes with cleaner management of multiple tablespaces, which allows hypertables to grow elastically across many disks (a hedged Python sketch of creating a hypertable appears after the related reading below). Information about the state of hypertables, including their dimensions and chunks, is now easily available. Robust cross-operating-system availability is important for usability, and this release brings improvements to support for Windows, FreeBSD, and NetBSD. TimescaleDB 1.0 also lays the foundation for a database scheduling framework that manages background jobs; and since TimescaleDB is implemented as an extension, a single PostgreSQL instance can run multiple, different versions of TimescaleDB. The release further handles edge cases related to schema and tablespace modifications, provides cleaner permissions for backup/recovery in templated databases, and includes additional test coverage.

TimescaleDB 1.0 supports Prometheus

Prometheus, a leading open source monitoring and alerting tool, is not arbitrarily scalable or durable in the face of disk or node outages. TimescaleDB 1.0, by contrast, is efficient, can easily handle terabytes of data, and supports high availability and replication, which makes it suitable for long-term data storage. It also provides advanced capabilities and features that are not available in Prometheus, such as full SQL, joins, and replication. Metrics recorded in Prometheus are first written to the local node and then written to TimescaleDB, so the metrics are immediately backed up and remain safe even if a disk fails on a Prometheus node.

What's the future like?

The team at Timescale says that upcoming releases of TimescaleDB will include more automation around capabilities like automatic data aggregation, retention, and archiving. They will also include automated data-management techniques for improving query performance, such as non-blocking reclustering and reindexing of older data.

Read more about this release on Timescale's official website.

Introducing TimescaleDB 1.0, the first OS time-series database with full SQL support
Cockroach Labs announced managed CockroachDB-as-a-Service
PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!
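As referenced above, here is a minimal Python sketch of working with a hypertable via psycopg2. The connection string, table, and data are hypothetical; create_hypertable is TimescaleDB's documented function for turning a regular PostgreSQL table into a time-partitioned hypertable.

```python
import psycopg2

# Hypothetical connection; assumes the timescaledb extension is installed.
conn = psycopg2.connect("dbname=metrics user=postgres")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT,
            temperature DOUBLE PRECISION
        );
    """)
    # Partition the table into chunks by time; chunking is what lets
    # TimescaleDB spread a hypertable across tablespaces and disks.
    cur.execute(
        "SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);"
    )
    # Inserts and queries then use plain SQL, as the article emphasizes.
    cur.execute(
        "INSERT INTO conditions VALUES (now(), %s, %s);",
        ("dev-1", 21.5),
    )
```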

Red Hat released RHEL 7.6

Amrata Joshi
01 Nov 2018
4 min read
On Tuesday, Red Hat announced the general availability of RHEL (Red Hat Enterprise Linux) 7.6, a consistent hybrid cloud foundation for enterprise IT. It is built on open source innovation, designed to enable organizations to keep pace with emerging cloud-native technologies, and supports IT operations across enterprise IT's four footprints. The beta version of RHEL 7.6 was released just three months ago. Red Hat Enterprise Linux 7.6 addresses a range of IT challenges, with an emphasis on security and compliance, management and automation, and Linux container innovations.

Features in RHEL 7.6

RHEL 7.6 solves security concerns

IT security has always been a key challenge for many IT departments, and it does not get easier in complex hybrid and multi-cloud environments. Red Hat Enterprise Linux 7.6 answers this problem by introducing support for Trusted Platform Module (TPM) 2.0 hardware modules as part of Network Bound Disk Encryption (NBDE). NBDE provides security across networked environments, whereas TPM works on-premise to add an additional layer of security, tying disks to specific physical systems. Together, these two layers of security for hybrid cloud operations help keep information on disks physically more secure.

RHEL 7.6 also makes it easier to manage firewalls, with improvements to nftables, a packet filtering framework, and it simplifies the configuration of counter-intrusion measures. Updated cryptographic algorithms for RSA and elliptic-curve cryptography (ECC) are enabled by default in RHEL 7.6, helping organizations that handle sensitive information keep pace with Federal Information Processing Standards (FIPS) compliance and standards bodies like the National Institute of Standards and Technology (NIST).

Management and automation get better

Red Hat Enterprise Linux 7.6 makes Linux adoption easier by enhancing the Red Hat Enterprise Linux Web Console, which provides a graphical overview of Red Hat system health and status. RHEL 7.6 makes it easier to find updates on the system summary page, and it provides automated configuration of single sign-on for identity management as well as a firewall control interface, which simplifies work for security administrators.

RHEL 7.6 also ships the extended Berkeley Packet Filter (eBPF), which provides a safer and more efficient mechanism for monitoring activity within the kernel; it will soon enable additional performance monitoring and network tracing tools.

Red Hat Enterprise Linux 7.6 additionally supports Red Hat Enterprise Linux System Roles, a collection of Ansible modules designed to provide a consistent way to automate and remotely manage Red Hat Enterprise Linux deployments. Each module provides a ready-made automated workflow for handling common and complex tasks involved in Linux environments. This automation helps remove the possibility of human error from such tasks, which in turn frees up IT teams to focus more on adding business value.

Red Hat's lightweight container toolkit

Red Hat Enterprise Linux 7.6 supports the rise of cloud-native technologies by introducing Red Hat's lightweight container toolkit, which comprises CRI-O, Buildah, Skopeo, and now Podman. Each of these tools is built on fully open source, community-backed technologies and on open standards like the Open Container Initiative (OCI) format.

Podman complements Buildah and Skopeo while sharing the same foundations as CRI-O. It enables users to run containers and groups of containers (pods) from a familiar command-line interface without the need for a daemon. This, in turn, reduces the complexity of container creation while making it easier for developers to build containers on workstations, in continuous integration/continuous development (CI/CD) systems, and within high-performance computing (HPC) or big data scheduling systems.

For more information on this release, check out Red Hat's official website.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
4 reasons IBM bought Red Hat for $34 billion

A kernel vulnerability in Apple devices gives access to remote code execution

Prasad Ramesh
01 Nov 2018
2 min read
A heap buffer overflow vulnerability was found in Apple's XNU OS kernel by Kevin Backhouse. An exploit can potentially cause any iOS or macOS device on the same network to reboot, without any user interaction. Apple has classified it as a remote code execution (RCE) vulnerability in the kernel, as it may be possible to exploit the buffer overflow to execute arbitrary code there. The vulnerability is fixed in iOS 12 and macOS Mojave.

The vulnerability is caused by a heap buffer overflow in the networking code of the XNU kernel. XNU is the operating system kernel developed by Apple and used in both iOS and macOS, so most iPhones, iPads, and MacBooks are affected. An attacker merely needs to send a malicious IP packet to the target device's IP address to trigger it. The attack only works if the attacker is on the same network as the target, which is easy to arrange on, say, a coffee shop's free WiFi network. Because the vulnerability sits in the kernel, antivirus software cannot protect the device. The attacker can control the size and content of the heap buffer, potentially gaining remote code execution on the device.

There are two known mitigations against this kernel vulnerability:

Enabling stealth mode in the macOS firewall prevents the attack from taking place.
Avoid public WiFi networks, where there is a high risk of being attacked.

These OS versions and devices are vulnerable:

All devices running Apple iOS 11 and earlier
All Apple macOS High Sierra devices up to 10.13.6 (patched in security update 2018-001)
Devices running Apple macOS Sierra up to 10.12.6 (patched in security update 2018-005)
Apple OS X El Capitan and earlier devices

Kevin Backhouse reported the kernel vulnerability to Apple in time for the fix to be rolled out with iOS 12 and macOS Mojave; the vulnerability was publicly announced on October 30. For more details, visit the LGTM website.

Final release for macOS Mojave is here with new features, security changes and a privacy flaw
The kernel community attempting to make Linux more secure
Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks

Intel Optane DC Persistent Memory available first on Google Cloud

Melisha Dsouza
01 Nov 2018
2 min read
On 30th October, Google announced in a blog post the alpha availability of virtual machines with 7TB of total memory using Intel Optane DC persistent memory. The partnership between Google, SAP, and Intel, announced in July, empowers users to handle and store large amounts of data and run in-memory databases such as SAP HANA. Now, with the availability of Intel Optane DC persistent memory on Google Cloud, GCP customers can scale up their workloads while benefiting from all the infrastructure capabilities and flexibility of Google Cloud.

Features of Intel Optane DC Persistent Memory

Intel Optane DC persistent memory has two special operating modes: App Direct mode and Memory mode. App Direct mode allows applications to receive the full value of the product's native persistence and larger capacity. In Memory mode, applications running in a supported operating system or virtual environment can use the persistent memory as volatile memory, gaining an increase in system capacity (made possible by module sizes of up to 512 GB) without rewriting software.

Unlike traditional DRAM, Intel Optane DC persistent memory offers high capacity, affordability, and persistence. Systems deploying the technology can expect improvements in analytics, databases and in-memory databases, artificial intelligence, high-capacity virtual machines and containers, and content delivery networks. The technology reduces in-memory database restart times from days or hours to minutes or seconds and expands system memory capacity. Google also stated that early customers have seen almost a 12x improvement in SAP HANA startup times using Intel Optane DC persistent memory. Alibaba, Cisco, Dell EMC, Fujitsu, and Hewlett Packard Enterprise are some of the many companies to have announced beta services and systems for early customer trials and deployments of the technology. The search engine giant has also hinted at a larger Optane-based VM offering to follow in 2019.

To know more about this news, head over to Google Cloud's official blog post.

What's new in Google Cloud Functions serverless platform
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers

GitHub now allows issue transfer between repositories; a public beta version

Savia Lobo
01 Nov 2018
3 min read
Yesterday, GitHub announced that repository admins can now transfer issues from one repository to another, better-fitting repository, to help those issues find their home. The feature is currently in public beta. Nat Friedman, CEO of GitHub, said in his tweet, "We've just shipped the ability to transfer an issue from one repo to another. This is one of the most-requested GitHub features. Feels good!"

When a user transfers an issue, the comments, assignees, and issue timeline events are retained. The issue's labels, projects, and milestones are not retained, although users can see past activity in the issue's timeline. People or teams who are mentioned in the issue will receive a notification letting them know that the issue has been transferred to a new repository, and the original URL redirects to the new issue's URL. People who don't have read permissions in the new repository will see a banner letting them know that the issue has been transferred to a repository they can't access.

Permission levels for issue transfer between repositories

People with an owner or team maintainer role can manage repository access with teams, and each team can have different repository access permissions. There are three types of repository permissions, i.e. Read, Write, and Admin, available for people or teams collaborating on repositories that belong to an organization. To transfer an open issue to another repository, the user needs admin permissions both on the repository the issue is in and on the repository the issue is being transferred to. If the issue is being transferred from a repository that's owned by an organization you are a member of, you must transfer it to another repository within your organization. To know more about repository permission levels, visit the GitHub Help documentation.

Steps to transfer an open issue to another repository

1. On GitHub, navigate to the main page of the repository.
2. Under your repository name, click Issues.
3. In the list of issues, click the issue you'd like to transfer.
4. In the right sidebar, click Transfer this issue.
5. In "Choose a repository," select the repository you want to transfer the issue to.
6. Click Transfer issue.

(A hedged sketch of the equivalent API call appears after the related reading below.)

GitHub Business Cloud is now FedRAMP authorized
GitHub updates developers and policymakers on EU copyright Directive at Brussels
GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage
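The announcement covers only the web UI flow above; GitHub's GraphQL API also exposes a transferIssue mutation that performs the same operation. Whether the API route was part of this beta isn't stated in the post, so treat the following Python sketch as a hedged illustration; the token and node IDs are placeholders.

```python
import requests

GITHUB_TOKEN = "..."    # personal access token (placeholder)
ISSUE_ID = "..."        # GraphQL node ID of the issue (placeholder)
TARGET_REPO_ID = "..."  # GraphQL node ID of the destination repo (placeholder)

# transferIssue moves an issue to another repository; like the UI flow,
# it requires admin rights on both repositories.
mutation = """
mutation($issueId: ID!, $repoId: ID!) {
  transferIssue(input: {issueId: $issueId, repositoryId: $repoId}) {
    issue { url }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": mutation,
          "variables": {"issueId": ISSUE_ID, "repoId": TARGET_REPO_ID}},
    headers={"Authorization": f"bearer {GITHUB_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # contains the new issue URL on success
```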

Neuron: An all-inclusive data science extension for Visual Studio

Prasad Ramesh
01 Nov 2018
3 min read
A team of students from Imperial College London has developed a new Visual Studio extension called neuron, which aims to be an all-inclusive add-on for data science tasks in Visual Studio. Using neuron is pretty simple: you begin with a regular Python or R code file in a window, and beside the code sits neuron's output window, which takes up half of the screen and starts as a blank page. When you run your code snippets, the output shows up there as interactive cards. Neuron can display outputs that are plain text, tables, images, graphs, or maps.

You can find neuron in the Visual Studio Marketplace. Once installed, a button becomes visible whenever you have a supported file open. Neuron uses Jupyter Notebook in the background; given Jupyter's popularity, it is likely already installed on your computer, and if not, you will be prompted to install it. Neuron supports more output types than Jupyter Notebook: you can also generate 3D graphs, maps, LaTeX formulas, markdown, HTML, and static images. The output is displayed in a card on the right-hand side, which can be resized, moved around, or expanded into a separate window. Neuron also keeps track of the code snippets associated with each card.

Why was neuron created?

Data scientists come from various backgrounds and use a set of standard tools like Python, its libraries, and Jupyter Notebook. Microsoft approached the students from Imperial College London to integrate this set of tools into one single workspace: a Visual Studio extension that lets users run data analysis operations without breaking their current workflow. Neuron combines the advantages of an intelligent IDE, Visual Studio, with the rapid execution and visualization of Jupyter Notebook, all in a single window.

It is not a new idea

Neuron is not an entirely new idea, though.

https://twitter.com/jordi_aranda/status/1057712899542654976

Comments on Reddit also suggest similar tools already exist in other IDEs. Reddit user kazi1 stated: "Seems more or less the same as Microsoft's current Jupyter extension (which is pretty meh). This seems like it's trying to reproduce the work already done by Atom's Hydrogen extension, why not contribute there instead." Another Redditor named procedural_ape said: "This looks like an awesome extension but shame on Microsoft for acting like this is their own fresh, new idea. Spyder has had this functionality for a while."

For more details, visit the Microsoft Blog; a demo is available on GitHub.

Visual Studio code July 2018 release, version 1.26 is out!
MIT plans to invest $1 billion in a new College of computing that will serve as an interdisciplinary hub for computer science, AI, data science
Microsoft releases the Python Language Server in Visual Studio

Google now requires you to enable JavaScript to sign-in as part of its enhanced security features

Melisha Dsouza
01 Nov 2018
3 min read
"Online security can sometimes feel like walking through a haunted house - scary, and you aren't quite sure what may pop up"
- Jonathan Skelker, product manager at Google

October 31st marked the end of Cybersecurity Awareness Month, and Google has made sure to leave its mark on the very last day. Introducing a host of features to protect users' accounts from being compromised, Google has added checkpoints before a user signs in, as soon as they are in their account, and when they share information with other apps and sites. Let's walk through these features in detail.

#1 Before you sign in: enable JavaScript in the browser

A mandatory requirement for signing in to Google now is that JavaScript be enabled on the Google sign-in page. When a user enters their credentials on the sign-in page, a risk assessment is run automatically to block any nefarious activity, and the sign-in is allowed only if nothing looks suspicious. The post mentions that "JavaScript is already enabled in your browser; it helps power lots of the websites people use everyday. But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users (0.1%) choose to keep it off".

#2 Security Checkup for protection once signed in

After the major update to Security Checkup introduced last year, Google has gone a step further to protect users against harmful apps, based on recommendations from Google Play Protect. The web dashboard helps users set up two-factor authentication, check which apps have access to their account information, and review unusual security events; it also explains how to remove accounts from devices users no longer use. Google is additionally introducing notifications that send personalized alerts whenever data is shared from a Google account with third-party sites or applications (including Gmail info, a shared Google Photos album, or Google Contacts). This looks like a step in the right direction, especially after a recent Oxford University study revealed that more than 90% of apps on the Google Play store had third-party trackers, leaking sensitive data to top tech companies.

#3 Help issued when a user account is compromised

The most notable of all the security features is a new, step-by-step process within a user's Google Account that is automatically triggered if the team detects potential unauthorized activity. The four steps that run in the event of a security breach are:

Verify critical security settings, to check that the account isn't vulnerable to additional attacks by other means, like a compromised recovery phone number or email address.
Secure other user accounts, since a user's Google Account might be a gateway to accounts on other services, and a hijacking can leave those vulnerable as well.
Check financial activity, to see if any payment methods connected to the account were abused.
Review content and files, to see if any Gmail or Drive data was accessed or misused.

Head over to Google's official blog to read more about this news.

Google's #MeToo underbelly exposed by NYT; Pichai assures they take a hard line on inappropriate conduct by people in positions of authority
Google employees plan a walkout to protest against the company's response to recent reports of sexual misconduct
A multimillion-dollar ad fraud scheme that secretly tracked users affected millions of Android phones. This is how Google is tackling it.

Introducing Howler.js, a Javascript audio library with full cross-browser support

Bhagyashree R
01 Nov 2018
2 min read
Developed by GoldFire Studios, Howler.js is an audio library for the modern web that makes working with audio in JavaScript easy and reliable across all platforms. It defaults to the Web Audio API and falls back to HTML5 Audio to provide support for all browsers and platforms, including IE9 and Cordova. Originally developed for an HTML5 game engine, it can be used just as well for any other audio-related function in web applications.

Features of Howler.js

Single API for all audio needs: it provides a simple and consistent API that makes it easier to build audio experiences in your application.
Audio sprites: for more precise playback and lower resource use, you can define and control segments of files with audio sprites.
Supports all codecs: MP3, MPEG, OPUS, OGG, OGA, WAV, AAC, CAF, M4A, MP4, WEBA, WEBM, DOLBY, and FLAC.
Auto-caching for improved performance: it automatically caches loaded sounds, which can be reused on subsequent calls for better performance and bandwidth.
Modular architecture: the modular architecture makes it easy to use and extend the library with custom features.

Which browsers does it support?

Howler.js is compatible with the following:

Google Chrome 7.0+
Internet Explorer 9.0+
Firefox 4.0+
Safari 5.1.4+
Mobile Safari 6.0+
Opera 12.0+
Microsoft Edge

Read more about Howler.js on its official website and check out its GitHub repository.

npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI

Learn Java programming and donate to charity with Packt and Humble Bundle

Richard Gall
31 Oct 2018
2 min read
Packt and Humble Bundle have teamed up once again to bring readers a massive selection of eBooks and videos for incredible prices. The bundle is worth a total of $1,712, but customers can buy everything for as little as $15 - that's a whole lot of Java learning resources for the price of lunch. Click here to begin exploring November's Java Humble Bundle.

Like every Humble Bundle offer, it isn't just about the products: customers can also donate money to some incredible charities. This month, Packt and Humble Bundle are supporting GameChanger, an organization that aims to improve the lives of children with life-threatening illnesses and their families through gaming. When you buy your bundle, you can choose how much money you donate and how it is split between the charity, Packt, and Humble Bundle. The offer ends on Monday 12 November.

https://www.youtube.com/watch?v=2URlzYasRl0

What you can get

For just $1 you can get your hands on:

Spring Security
Learning RxJava
Learn Algorithms and Data Structures in Java for Day-to-Day Applications [Video]
Java 9 Concurrency Cookbook, Second Edition
Master Java Web Services and REST API with Spring Boot [Video]

For as little as $8 you can get all of the above as well as:

Learning Java EE 8 [Video]
Building Web Apps with Spring 5 and Angular
Java EE 8 Application Development
Cloud-Native Applications in Java
Java 9 Programming By Example
Java EE 8 Cookbook
Java Projects
Java for beginners: Step-by-step hands-on guide to Java [Video]

And for as little as $15 you can get all of that, as well as:

Java EE 8 Microservices [Video]
Learning Spring Boot 2.0
Mastering Java Machine Learning
Architecting Modern Java EE Applications
Java EE 8 and Angular
Spring 5 Design Patterns
Java 11 Cookbook
Learning Java by building Android Games
Mastering Java 11
Learn Spring Boot in 100 Steps - Beginner to Expert [Video]
Master Microservices with Spring Boot and Spring Cloud [Video]

Tech stocks had a bad month in October

Savia Lobo
31 Oct 2018
2 min read
The stock market for top tech companies is in shambles. Amazon.com Inc (AMZN.O) and Google parent Alphabet Inc (GOOGL.O) have suffered a battering on Wall Street over the last month after being on top for the past year. Amazon's stock is down 23% in October alone. Shares of Facebook closed on Tuesday up 2.9 percent at $146.22. Monday was a tumultuous day for tech stocks broadly, as news that IBM had agreed to buy cloud software distributor Red Hat for $34 billion, a 63 percent premium, shocked everyone. Red Hat surged on the news, while IBM was down 4.1 percent. Facebook and Amazon shares have been hit badly, a stark contrast to the past four years.

Baidu, Alibaba and Tencent (B-A-T) are down!

A blog post by Michael K. Spencer, a blockchain consultant, compares the stocks of top tech companies with current stock market trends around the world, particularly China's stocks, which have been falling thanks to the trade war between America and China. It makes for an interesting read!

China's largest technology companies, Baidu, Alibaba, and Tencent (also known as the BATs), have lost around $165 billion in value year-to-date. U.S.-listed Alibaba and Baidu are caught up in the broader sell-off in Chinese stocks because of the U.S.-China trade war. Tencent, on the other hand, is down because the Chinese government has raised concerns about eye problems, citing video games as one of the causes; Tencent, which makes a huge amount of money from games, has taken a hit because of this decision.

Amazon stocks surge past $2000, expect Amazon to join Apple in the $1 trillion market cap club anytime now
Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call
How to develop a stock price predictive model using Reinforcement Learning and TensorFlow