
How-To Tutorials

7007 Articles

Create a Personal Portfolio Website with JavaScript and ChatGPT

Maaike van Putten
04 Jun 2023
9 min read
This article is the first part of a series; see Part 2 to learn how to add a chatbot to the portfolio website you create here.

Creating a personal portfolio is a great way to showcase your skills and accomplishments as a developer or designer. Does that sound like a lot of work? Well… it doesn't have to be. We can use ChatGPT to generate code snippets and obtain a lot of guidance throughout the process, so you can build an impressive portfolio website with minimal effort. Here's what you can do in around 10 prompts:

Fig 1: Homepage
Fig 2: Portfolio Page
Fig 3: Contact Page

Not bad, right? And it even contains some features:

- The search functionality works and filters projects based on what you are typing.
- The testimonials and projects are not hard-coded but dynamically populated with JavaScript (they are not connected to a backend with a database, so for the purpose of this article they are hard-coded in the script).

Of course, this personal portfolio would need more content, and you could definitely use ChatGPT to generate more content and descriptions for it. You should also add some professional pictures and images to complete your portfolio. Still, this is a project that you can do in half a day to a day using ChatGPT.

The prompt magic

The prompts used to create the personal portfolio follow a few best practices that I'll discuss later. In the overview below, asking ChatGPT to continue after it cuts off a response is not included. If you'd like to see all the responses and the versions of the personal portfolio after each step, you can check out this GitHub repo: https://github.com/BrightBoost/brightboost-portfolio-chatgpt

Here is the first prompt:

"I'm making a portfolio site, using HTML/CSS/JS. Can you help me write the initial framework for it? I need a home page, in which I will introduce me, my passions, aspirations and study background. I also need a call-to-action directing visitors to explore my work. Then I need a portfolio page, in which I showcase my best projects. This will include case studies, and testimonials highlighting my contributions and accomplishments. Finally I need a contact page, which is a minimalist form and social media links in order to facilitate professional connections."

And the second:

"That seems to be close to what I want. However, could you split these into 3 files? One for each page."

The third prompt, as you can see, does a lot of the heavy lifting in terms of content:

"Can you populate the pages with the following information?
Name: Olivia Martinez
Study: Olivia recently graduated from the University of California, Berkeley with a Bachelor's degree in Computer Science.
Aspirations: Olivia aspires to become a full-stack software engineer at a tech startup that focuses on environmental sustainability. She hopes to contribute her skills to develop innovative solutions for pressing environmental issues.
Hobbies: In her free time, Olivia enjoys hiking, painting, and exploring the latest advancements in machine learning.
Example Open Source Projects:
- GreenRoute: A web application that optimizes travel routes based on carbon emissions.
- EcoClicker: A browser extension that encourages users to reduce their digital carbon footprint.
Additional Personal Details: Olivia has been an active volunteer at her local recycling center, where she has helped develop a digital platform to streamline recycling processes."
This is what it looked like after this prompt:

Fig 4: Homepage after initial prompts
Fig 5: Portfolio page after prompt
Fig 6: Contact Page after prompt

The fourth prompt was quite a challenge and required going back and forth a bit and testing until it was good. It was tempting to just modify the code by hand, but ChatGPT was supposed to create it here, and it did eventually:

"Can you help me modify the following snippet?
```html
<h2>Portfolio</h2>
<div class="project">
  <h3>GreenRoute</h3>
  <p>A web application that optimizes travel routes based on carbon emissions.</p>
  <a href="#" class="project-link">View Case Study</a>
  <div class="testimonials">
    <p>Testimonial 1</p>
    <p>Testimonial 2</p>
  </div>
</div>
<div class="project">
  <h3>EcoClicker</h3>
  <p>A browser extension that encourages users to reduce their digital carbon footprint.</p>
  <a href="#" class="project-link">View Case Study</a>
  <div class="testimonials">
    <p>Testimonial 1</p>
    <p>Testimonial 2</p>
  </div>
</div>
```
I'm not satisfied with the look. Could you make the following changes:
- Each project is displayed in a card.
- The project link looks like a button, in the bottom right.
- The title is underlined, and a bit larger.
- The page shows 2 columns of cards."

Fig 7: Home page after refined prompting

And here's the fifth:

"I need to make sure that footer is always at the bottom of the page, can you provide a CSS snippet to make that work?"

This also needed a second attempt because it wasn't working. Don't just say that it doesn't work, but be specific:

"It doesn't seem to work. The page only uses about 50% of the screen, so the footer is still in the middle."

After this, it looks like:

Fig 8: Homepage after footer prompts

This is where things really got cool, but it needed a few tweaks in terms of output. Here was the first prompt to add JavaScript:

"I'd like to make the portfolio a bit more extendable. Can you write some JavaScript code that generates the portfolio page using an array of objects? For now just put the content directly in code."

I forgot a few classes, so let's prompt again:

"This works, but you've excluded the classes used in the CSS. As a reminder, this is how a single item should look:" ** code of the prompt omitted **

And after this it was good:

"It seems the 2 column layout is gone. I think this:
```html
<section id="portfolio"><div class="container" id="portfolio-container"></div></section>
```
Should contain an element with the class `project-grid` somewhere, which should create a grid. Can you modify the snippet?"

The last part was the search bar, which required this prompt:

"I'd like to add a search bar to the portfolio page. It must search for the text in the title and body. I only want to look for the exact text. After each character it should update the list, filtering out any project that does not match the search text. Then there should be a button to clear the search bar, and show all projects. Can you add this to the JavaScript file?"

And that's it! Of course, there are many ways to do this, but this is one way you can use ChatGPT to create a personal portfolio. Let's look at some best practices for your ChatGPT prompts, to help you use it to create your own.

Best practices for ChatGPT prompts

There are some best practices I figured out when working with ChatGPT.
Let's go over them.

- Be specific and clear: Make sure your prompt leaves little room for interpretation. For example, the prompt "Help me with a grid layout." is not going to help you as much as "For this piece of HTML containing bootstrap cards provide a CSS snippet for a responsive 3-column grid layout with a 20px gap between columns: ** insert your HTML snippet here **"
- Include relevant context and background information: Give the AI enough information to understand the problem or task and help you to the best of its ability. Don't ask "How do I convert a date string to a Date object?" but ask "I have a JSON object with date and value properties. How do I convert the date property to a JavaScript Date object?"
- Ask one question at a time: Keep your prompts focused and avoid asking multiple questions in one prompt.
- Make sure ChatGPT completes its answer before asking the next question: Sometimes it cuts off the result. You can ask it to continue and it will. That's harder when you're further down the line.
- Test the result after every step: Related to the previous tip, make sure to test the result after every step. This way you can provide feedback on the outcome and ChatGPT can still adjust easily. Step? Yes!
- Break down big projects into smaller tasks: Divide your project into manageable steps, and ask the AI to complete each task separately.
- Bonus tip: You can even ask ChatGPT for help breaking your project into smaller, very detailed tasks. Then go ahead and ask it to do one task at a time.

The good news is these tips are actually great interaction tips with humans as well!

Author Bio

Maaike van Putten is an experienced software developer and Pluralsight, LinkedIn Learning, Udemy, and Bright Boost instructor. She has a passion for software development and helping others get to the next level in their career.

You can follow Maaike on: LinkedIn | Training Courses


Using ChatGPT with Text to Speech

Denis Rothman
04 Jun 2023
7 min read
This article provides a quick guide to using the OpenAI API to jump-start ChatGPT. The guide includes instructions on how to use a microphone to speak to ChatGPT and how to create a ChatGPT request with variables. Additionally, the article explains how to use Google gTTS, a text-to-speech tool, to listen to ChatGPT's response. By following these steps, you can have a more interactive experience with ChatGPT and make use of its advanced natural language processing capabilities.

We're using the GPT-3.5-Turbo architecture in this example. We are also running the examples within Google Colab, but they should be applicable to other environments.

In this article, we'll cover:

- Installing OpenAI, your API key, and Google gTTS for text-to-speech
- Generating content with ChatGPT
- Converting ChatGPT's response to speech
- Transcribing with Whisper

To understand GPT-3 Transformers in detail, read Transformers for NLP, 2nd Edition.

1. Installing OpenAI, gTTS, and your API Key

There are a few libraries that we'll need to install into Colab for this project. We'll install them as required, starting with OpenAI.

Installing and Importing OpenAI

To start using OpenAI's APIs and tools, we'll need to install the OpenAI Python package and import it into your project. To do this, you can use pip, a package manager for Python. First, make sure you have pip installed on your system:

```
!pip install --upgrade pip
```

Next, run the following script in your notebook to install the OpenAI package. It should come pre-installed in Colab:

```
#Importing openai
try:
  import openai
except:
  !pip install openai
  import openai
```

Installing gTTS

Next, install Google gTTS, a Python library that provides an easy-to-use interface for text-to-speech synthesis using the Google Text-to-Speech API:

```
#Importing gTTS
try:
  from gtts import gTTS
except:
  !pip install gTTS
  from gtts import gTTS
```

API Key

Finally, import your API key. Rather than enter your key directly into your notebook, I recommend keeping it in a local file and importing it from your script. You will need to provide the correct path and filename in the code below.

```
from google.colab import drive
drive.mount('/content/drive')
f = open("drive/MyDrive/files/api_key.txt", "r")
API_KEY = f.readline()
f.close()

#The OpenAI Key
import os
os.environ['OPENAI_API_KEY'] = API_KEY
openai.api_key = os.getenv("OPENAI_API_KEY")
```

2. Generating Content

Let's look at how to pass prompts into the OpenAI API to generate responses.

Speech to text

When it comes to speech recognition, Windows provides built-in speech-to-text functionality. However, third-party speech-to-text modules are also available, offering features such as multiple language support, speaker identification, and audio transcription. For simple speech-to-text, this notebook uses the built-in functionality in Windows. Press Windows key + H to bring up the Windows speech interface. You can read the documentation for more information.

Note: For this notebook, press Enter when you have finished asking for a request in Colab. You could also adapt the function in your application with a timed input function that automatically sends a request after a certain amount of time has elapsed.

Preparing the Prompt

Note: you can create variables for each part of the OpenAI messages object. This object contains all the information needed to generate a response from ChatGPT, including the text prompt, the model ID, and the API key. By creating variables for each part of the object, you can make it easier to generate requests and responses programmatically.
For example, you could create a prompt variable that contains the text prompt for generating a response. You could also create variables for the model ID and API key, making it easier to switch between different OpenAI models or accounts as needed. For more on implementing each part of the messages object, take a look at: Prompt_Engineering_as_an_alternative_to_fine_tuning.ipynb.

Here's the code for accepting the prompt and passing the request to OpenAI:

```
#Speech to text. Use OS speech-to-text app. For example, Windows: press Windows Key + H
def prepare_message():
  #enter the request with a microphone or type it if you wish
  #example: "Where is Tahiti located?"
  print("Enter a request and press ENTER:")
  uinput = input("")

  #preparing the prompt for OpenAI
  role = "user"
  #prompt="Where is Tahiti located?" #maintenance or if you do not want to use a microphone
  line = {"role": role, "content": uinput}

  #creating the message
  assert1 = {"role": "system", "content": "You are a helpful assistant."}
  assert2 = {"role": "assistant", "content": "Geography is an important topic if you are going on a once in a lifetime trip."}
  assert3 = line
  iprompt = []
  iprompt.append(assert1)
  iprompt.append(assert2)
  iprompt.append(assert3)
  return iprompt

#run the cell to start/continue a dialog
iprompt = prepare_message()  #preparing the messages for ChatGPT
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=iprompt)  #ChatGPT dialog
text = response["choices"][0]["message"]["content"]  #extract the text of the response
print("ChatGPT response:", text)
```

Here's a sample of the output:

```
Enter a request and press ENTER:
Where is Tahiti located
ChatGPT response: Tahiti is located in the South Pacific Ocean, specifically in French Polynesia. It is part of a group of islands called the Society Islands and is located approximately 4,000 kilometers (2,500 miles) south of Hawaii and 7,850 kilometers (4,880 miles) east of Australia.
```

3. Text-to-speech for the response: gTTS and IPython

Once you've generated a response from ChatGPT using the OpenAI package, the next step is to convert the text into speech using gTTS (Google Text-to-Speech) and play it back using IPython audio:

```
from gtts import gTTS
from IPython.display import Audio

tts = gTTS(text)
tts.save('1.wav')
sound_file = '1.wav'
Audio(sound_file, autoplay=True)
```

4. Transcribing with Whisper

If your project requires the transcription of audio files, you can use OpenAI's Whisper.

First, we'll install the ffmpeg audio processing library. ffmpeg is a popular open-source software suite for handling multimedia data, including audio and video files:

```
!pip install ffmpeg
```

Next, we'll install Whisper:

```
!pip install git+https://github.com/openai/whisper.git
```

With that done, we can use a simple command to transcribe the WAV file and store it as a JSON file with the same name:

```
!whisper 1.wav
```

You'll see Whisper transcribe the file in chunks:

```
[00:00.000 --> 00:06.360]  Tahiti is located in the South Pacific Ocean, specifically in the archipelago of society
[00:06.360 --> 00:09.800]  islands and is part of French Polynesia.
[00:09.800 --> 00:22.360]  It is approximately 4,000 miles, 6,400 km, south of Hawaii and 5,700 miles, 9,200 km,
[00:22.360 --> 00:24.640]  west of Santiago, Chile.
```

Once that's done, we can read the JSON file and display the text object:

```
import json

with open('1.json') as f:
    data = json.load(f)

text = data['text']
print(text)
```

This gives the following output:

Tahiti is located in the South Pacific Ocean, specifically in the archipelago of society islands and is part of French Polynesia.
It is approximately 4,000 miles, 6,400 km, south of Hawaii and 5,700 miles, 9,200 km, west of Santiago, Chile.

By using Whisper in combination with ChatGPT and gTTS, you can create a fully featured AI-powered application that enables users to interact with your system using natural language inputs and receive audio responses. This might be useful for applications that involve transcribing meetings, conferences, or other audio files.

About the Author

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, designing one of the very first word2matrix patented embedding and patented AI conversational agents. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots applied as an automated language teacher for Moet et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers. He then authored an advanced planning and scheduling (APS) solution used worldwide.

You can follow Denis on LinkedIn: https://www.linkedin.com/in/denis-rothman-0b034043/

Copyright 2023 Denis Rothman, MIT License


Generating Data Descriptions with OpenAI ChatGPT

Greg Beaumont
02 Jun 2023
5 min read
This article is an excerpt from the book, Machine Learning with Microsoft Power BI, by Greg Beaumont. This book is designed for data scientists and BI professionals seeking to improve their existing solutions and workloads using AI.

Data description generation plays a vital role in understanding complex datasets, but it can be a time-consuming task. Enter ChatGPT, an advanced AI language model developed by OpenAI. Trained on extensive text data, ChatGPT demonstrates impressive capabilities in understanding and generating human-like responses. In this article, we explore how ChatGPT can revolutionize data analysis by expediting the creation of accurate and coherent data descriptions. We delve into its training process, architecture, and potential applications in fields like research, journalism, and business analytics. While acknowledging limitations, we unveil the transformative potential of ChatGPT for data interpretation and knowledge dissemination.

Our first step will be to identify a suitable use case for leveraging the power of GPT models to generate descriptions of elements of FAA Wildlife Strike data. Our objective is to unlock the potential of external data by creating prompts for GPT models that can provide detailed information and insights about the data we are working with. Through this use case, we will explore the value that GPT models can bring to the table when it comes to data analysis and interpretation.

For example, a description of the FAA Wildlife Strike Database by ChatGPT might look like this:

Figure 1 – OpenAI ChatGPT description of FAA Wildlife Strike Database

Within your solution using the FAA Wildlife Strike database, you have data that could be tied to external data using the GPT models. A few examples include additional information about:

- Airports
- FAA Regions
- Flight Operators
- Aircraft
- Aircraft Engines
- Animal Species
- Time of Year

When the scoring process for a large number of separate rows in a dataset is automated, we can use a GPT model to generate descriptive text for each individual row. It is worth noting that ChatGPT's approach varies from this, as it operates as a chatbot that calls upon different GPT models and integrates past conversations into future answers. Despite the differences in how GPT models will be used in the solution, ChatGPT can still serve as a valuable tool for testing various use cases.

When using GPT models, the natural language prompts that are used to ask questions and give instructions will impact the context of the generated text. Prompt engineering is a topic that has surged in popularity for OpenAI and LLMs. The following prompts will provide different answers when using "dogs" as a topic for a GPT query:

- Tell me about dogs:
- From the perspective of an evolutionary biologist, tell me about dogs:
- Tell me the history of dogs:
- At a third-grade level, tell me about dogs:

When planning for your use of OpenAI on large volumes of data, you should test and evaluate your prompt engineering strategy. For this book, the use cases will be kept simple since the goal is to teach tool integration with Power BI. Prompt engineering expertise will probably be the topic of many books and blogs this year.

You can test different requests for a description of an FAA Region in the data:

Figure 2 – Testing the utility of describing an FAA Region using OpenAI ChatGPT

You can also combine different data elements for a more detailed description. The following example combines data fields with a question to ask "Tell me about the Species in State in Month":

Figure 3 – Using ChatGPT to test a combination of data about Species, State, and Month

There are many different options to consider. To combine a few fields of data and provide useful context about the data, you decide to plan a use case for describing the aircraft and operator. An example can be tested in OpenAI ChatGPT with a formula such as "Tell me about the airplane model Aircraft operated by the Operator in three sentences." Here is an example using data from a single row of the FAA Wildlife Strike database:

Figure 4 – Information about an airplane in the fleet of an operator as described by OpenAI ChatGPT

From a prompt engineering perspective, asking this question for multiple reports in the FAA Wildlife Strike database would require running the following natural language query on each row of data (column names are depicted with brackets):

Tell me about the airplane model [Aircraft] operated by [Operator] in three sentences:
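The book wires this scoring into Power BI; purely as an illustration (not code from the book), here is a minimal Python sketch of running that per-row query. It assumes a pandas DataFrame with hypothetical Aircraft and Operator columns, an already-configured OpenAI API key, and the same openai.ChatCompletion call shown in the text-to-speech article earlier on this page.

```python
# A rough sketch (not from the book): generate one description per row of
# FAA Wildlife Strike data using the bracketed prompt template above.
# The sample rows and column names are hypothetical.
import openai  # assumes openai.api_key has already been set
import pandas as pd

strikes = pd.DataFrame([
    {"Aircraft": "B-737-800", "Operator": "SOUTHWEST AIRLINES"},
    {"Aircraft": "A-320", "Operator": "DELTA AIR LINES"},
])

def describe_row(row):
    # Fill the template with the values from a single row of data
    prompt = (f"Tell me about the airplane model {row['Aircraft']} "
              f"operated by {row['Operator']} in three sentences:")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

strikes["Description"] = strikes.apply(describe_row, axis=1)
print(strikes["Description"].iloc[0])
```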
Summary

This article explores how ChatGPT expedites the generation of accurate and coherent data descriptions. Unveiling its training process, architecture, and applications in research, journalism, and business analytics, we showcase how ChatGPT revolutionizes data interpretation and knowledge dissemination. Acknowledging limitations, we highlight the transformative power of this AI technology in enhancing data analysis and decision-making.

Author Bio

Greg Beaumont is a Data Architect at Microsoft; Greg is an expert in solving complex problems and creating value for customers. With a focus on the healthcare industry, Greg works closely with customers to plan enterprise analytics strategies, evaluate new tools and products, conduct training sessions and hackathons, and architect solutions that improve the quality of care and reduce costs. With years of experience in data architecture and a passion for innovation, Greg has a unique ability to identify and solve complex challenges. He is a trusted advisor to his customers and is always seeking new ways to drive progress and help organizations thrive. For more than 15 years, Greg has worked with healthcare customers who strive to improve patient outcomes and find opportunities for efficiencies. He is a veteran of the Microsoft data speaker network and has worked with hundreds of customers on their data management and analytics strategies.

You can follow Greg on his LinkedIn


Summarizing Data with OpenAI ChatGPT

Greg Beaumont
02 Jun 2023
4 min read
This article is an excerpt from the book, Machine Learning with Microsoft Power BI, by Greg Beaumont. This book is designed for data scientists and BI professionals seeking to improve their existing solutions and workloads using AI.

In the ever-expanding landscape of data analysis, the ability to summarize vast amounts of information concisely and accurately is invaluable. Enter ChatGPT, an advanced AI language model developed by OpenAI. In this article, we delve into the realm of data summarization with ChatGPT, exploring how this powerful tool can revolutionize the process of distilling complex datasets into concise and informative summaries.

Numerous databases feature free text fields that comprise entries from a diverse array of sources, including survey results, physician notes, feedback forms, and comments regarding incident reports for the FAA Wildlife Strike database that we have used in this book. These text entry fields represent a wide range of content, from structured data to unstructured data, making it challenging to extract meaning from them without the assistance of sophisticated natural language processing tools.

The Remarks field of the FAA Wildlife Strike database contains text that was presumably entered by people involved in filling out the incident form about an aircraft striking wildlife. A few examples of the remarks for recent entries are shown in Power BI in the following screenshot:

Figure 1 – Examples of Remarks from the FAA Wildlife Strike Database

You will notice that the remarks have a great deal of variability in the format of the content, the length of the content, and the acronyms that were used. Testing one of the entries by simply adding the statement "Summarize the following:" at the beginning yields the following result:

Figure 2 – Summarizing the remarks for a single incident using ChatGPT

Summarizing data for a less detailed Remarks field yields the following results:

Figure 3 – Summarization of a sparsely populated results field

In order to obtain uniform summaries from the FAA Wildlife Strike data's Remarks field, one must consider entries that vary in robustness, sparsity, completeness of sentences, and the presence of acronyms and quick notes. The workshop accompanying this technical book is your chance to experiment with various data fields and explore diverse outcomes. Both the book and the Packt GitHub site will utilize a standardized format as input to a GPT model that can incorporate event data and produce a consistent summary for each row. An example of the format is as follows:

Summarize the following in three sentences: A [Operator] [Aircraft] struck a [Species]. Remarks on the FAA report were: [Remarks].

Using data from an FAA Wildlife Strike Database event to test this approach in OpenAI ChatGPT is shown in the following screenshot:

Figure 4 – OpenAI ChatGPT testing a summarization of the remarks field

Next, you test another scenario that had more robust text in the Remarks field:

Figure 5 – Another scenario with robust remarks tested using OpenAI ChatGPT
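As a quick illustration of the standardized format (a sketch of mine, not code from the book or its GitHub site), the prompt for one event could be assembled like this in Python, with a simple fallback for sparsely populated Remarks. The event dictionary, its field names, and the sample values are invented:

```python
# A minimal sketch (not from the book): build the standardized summarization
# prompt for a single FAA Wildlife Strike event. The field names mirror the
# bracketed template above; the sample values are made up.
def build_summary_prompt(event):
    remarks = (event.get("Remarks") or "").strip()
    if not remarks:
        # Sparse rows: note the absence instead of sending an empty field
        remarks = "No remarks were recorded for this incident."
    return (
        "Summarize the following in three sentences: "
        f"A {event['Operator']} {event['Aircraft']} struck a {event['Species']}. "
        f"Remarks on the FAA report were: {remarks}"
    )

event = {
    "Operator": "UNITED AIRLINES",
    "Aircraft": "B-737-900",
    "Species": "Canada goose",
    "Remarks": "BIRD STRUCK #2 ENG ON TKOF ROLL. ENG VIBRATION, RTO.",
}
print(build_summary_prompt(event))
```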
Summary

This article explores how ChatGPT can revolutionize the process of condensing complex datasets into concise and informative summaries. By leveraging its powerful language generation capabilities, ChatGPT enables researchers, analysts, and decision-makers to quickly extract key insights and make informed decisions. Dive into the world of data summarization with ChatGPT and unlock new possibilities for efficient data analysis and knowledge extraction.

Author Bio

Greg Beaumont is a Data Architect at Microsoft; Greg is an expert in solving complex problems and creating value for customers. With a focus on the healthcare industry, Greg works closely with customers to plan enterprise analytics strategies, evaluate new tools and products, conduct training sessions and hackathons, and architect solutions that improve the quality of care and reduce costs. With years of experience in data architecture and a passion for innovation, Greg has a unique ability to identify and solve complex challenges. He is a trusted advisor to his customers and is always seeking new ways to drive progress and help organizations thrive. For more than 15 years, Greg has worked with healthcare customers who strive to improve patient outcomes and find opportunities for efficiencies. He is a veteran of the Microsoft data speaker network and has worked with hundreds of customers on their data management and analytics strategies.

You can follow Greg on LinkedIn


Data Cleaning Made Easy with ChatGPT

Sagar Lad
02 Jun 2023
5 min read
Identifying inconsistencies and inaccuracies in the data is a vital part of the data analysis process. ChatGPT is a natural language processing tool powered by AI that enables users to have human-like conversations and helps them complete tasks quickly. In this article, we'll focus on how ChatGPT can make the process of data cleansing and cleaning more efficient.

Data Cleansing/Cleaning with ChatGPT

Given the volume, velocity, and variety of data we deal with nowadays, manually carrying out the data cleansing task is a very time-consuming process. Data cleansing steps such as removing duplicate data and enforcing data validity, uniqueness, consistency, and correctness all increase the quality of the data. Cleansed data provides better business insights and lets business users make wise decisions. Data cleansing activities go through a series of steps, starting with gathering the data and ending with integrating, producing, and normalizing the data, as shown in the image below:

Image 1: Data cleansing cycle

The majority of corporate organizations carry out the following tasks as part of the exploratory data analysis's data cleansing procedure:

- Identify and clean up duplicate values
- Fill null values with a default value
- Rectify and correct inconsistent data
- Standardise date formats
- Standardise names or addresses
- Strip area codes out of phone numbers
- Flatten nested data structures
- Erase incomplete data
- Detect conflicts in the database

The strength of ChatGPT allows us to perform time-consuming and extremely boring tasks like data purification with ease. Let's use an example of employee details for the banking industry to better understand this. The dataset has the columns Employee ID, Employee Name, Department Name, and Joining Date. While reviewing the data, we discovered a number of data quality concerns that must be resolved before we can truly use this data for analytics. For example, the Employee Name values are inconsistent, with some instances using lowercase while others use uppercase letters, and the date format is not uniform in the Joining Date column.

Traditional Way of Working

To clean up this data in Excel, we must manually construct the formulas and apply functions like TRIM, UPPER, or LOWER before using it for analytics. It calls for development work and upkeep of Excel logic without version control, history, etc. Sounds extremely tedious, doesn't it?

Working with ChatGPT

We can utilize ChatGPT to automate the aforementioned data purification operation by having it generate some Python code. In this example, we'll use ChatGPT to demonstrate how to standardize the employee names and the date format of the joining date.

ChatGPT prompt:

Here is the prompt that we can provide in text format, in case you plan to copy and paste:

Employee ID | Employee Name | Department Name | Joining Date
214         | john Root     | HR              | 1-06-2003
435         | STEVE Smith   | Retail          | 21-Feb-05
654         | Sachin WALA   | OPSI            | 25-July-1999

Above is the employee data source which should be cleaned. Employee names are not consistent, and the joining date is not in a uniform date format. Generate Python code to create accurate data.

Image 2: Input to ChatGPT

We pass a dataset and a description of how, and for which columns, we want to clean the data, as seen in the image above.

Output from ChatGPT

ChatGPT automatically creates Python code with a variety of generic functions to clean the specified columns in accordance with our specifications. The ChatGPT tool's output Python code is shown below.

Image 3: Output Python code from ChatGPT
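The generated script itself appears only as a screenshot in the original article. As a rough sketch of what such ChatGPT-generated cleaning code typically looks like (my own illustration, not the actual output), using pandas and dateutil:

```python
# A rough sketch (not the actual ChatGPT output shown above): standardize
# employee names and joining dates for the sample table from the prompt.
import pandas as pd
from dateutil import parser

df = pd.DataFrame({
    "Employee ID": [214, 435, 654],
    "Employee Name": ["john Root", "STEVE Smith", "Sachin WALA"],
    "Department Name": ["HR", "Retail", "OPSI"],
    "Joining Date": ["1-06-2003", "21-Feb-05", "25-July-1999"],
})

# Standardize names: trim whitespace and use title case
df["Employee Name"] = df["Employee Name"].str.strip().str.title()

# Parse the mixed date formats and re-emit them in a single ISO format
df["Joining Date"] = df["Joining Date"].apply(
    lambda d: parser.parse(d, dayfirst=True).strftime("%Y-%m-%d")
)

print(df)
```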
After running the Python code generated by ChatGPT on the stated data, ChatGPT also displays a sample result. It is clear that employee names are now uniform, and the joining date is likewise shown using a common date format.

Image 4: Sample output from ChatGPT

This Python code can be used to clean any data source in the future, not just the employee dataset. Therefore, using ChatGPT's capabilities, we can develop a data cleaning process that is precise, effective, and fully automated. If you are struggling with a large volume of data and need to spend a lot of time on data cleaning, there are also tools on the market, such as RATH, that integrate with ChatGPT to simplify the data analysis workflow and increase your productivity without a lot of manual work.

Conclusion

This article gave you a fundamental grasp of the data cleaning/cleansing procedure, which will enable you to use the data to make more trustworthy decisions. ChatGPT offers a simple and effective way to clean your data, whatever the volume.

Author Bio:

Sagar Lad is a Cloud Data Solution Architect with a leading organisation and has deep expertise in designing and building Enterprise-grade Intelligent Azure Data and Analytics Solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP.

You can follow Sagar on - Medium, Amazon, LinkedIn


Responding to Generative AI from an Ethical Standpoint

Dr. Alex Antic
02 Jun 2023
7 min read
This article is an excerpt from the book Creators of Intelligence, by Dr. Alex Antic. This book will provide you with insights from 18 AI leaders on how to build a rewarding data science career.

As Generative Artificial Intelligence (AI) continues to advance, the need for ethical considerations becomes increasingly vital. In this article, we engage in a conversation between a Generative AI expert, Edward Santow, and the author to uncover practical ways to incorporate ethics into the rapidly evolving landscape of generative AI, ensuring its responsible and beneficial implementation.

Importance of Ethics in Generative AI

Generative AI is a rapidly developing field with the potential to revolutionize many aspects of our lives. However, it also raises a number of ethical concerns. Some of the most pressing ethical issues in generative AI include:

- Bias: Generative AI models are trained on large datasets, which can introduce bias into the models. This bias can then be reflected in the outputs of the models, such as the images, text, or music that they generate.
- Transparency: Generative AI models are often complex and difficult to understand. This can make it difficult to assess how the models work and to identify any potential biases.
- Accountability: If a generative AI model is used to generate harmful content, such as deepfakes or hate speech, it is important to be able to hold the developers of the model accountable.
- Privacy: Generative AI models can be used to generate content that is based on personal data. This raises concerns about the privacy of individuals whose data is used to train the models.
- Fairness: Generative AI models should be used in a way that is fair and does not discriminate against any particular group of people.

It is important to address these ethical concerns in order to ensure that generative AI is used in a responsible and ethical manner. Some of the steps that can be taken to address these concerns include:

- Using unbiased data: When training generative AI models, it is important to use data that is as unbiased as possible. This can help to reduce the risk of bias in the models.
- Making models transparent: It is important to make generative AI models as transparent as possible. This can help to identify any potential biases and to make it easier to understand how the models work.
- Holding developers accountable: If a generative AI model is used to generate harmful content, it is important to be able to hold the developers of the model accountable. This can be done by developing clear guidelines and regulations for the development and use of generative AI.
- Protecting privacy: It is important to protect the privacy of individuals whose data is used to train generative AI models. This can be done by using anonymized data or by obtaining consent from individuals before using their data.
- Ensuring fairness: Generative AI models should be used in a way that is fair and does not discriminate against any group of people. This can be done by developing ethical guidelines for the use of generative AI.

By addressing these ethical concerns, we can help to ensure that generative AI is used in a responsible and ethical manner.

Ed Santow's Opinion on Implementing Ethics

Given the popularity and advances in generative AI tools, such as ChatGPT, I'd like to get your thoughts on how generative AI has impacted ethics frameworks. What complications has it added?
Ed Santow: In one sense, it hasn't, as the frameworks are broad enough and apply to AI generally, and their application depends on adapting to the specific context in which they're being applied. One of the great advantages of this is that generative AI is included within its scope. It may be a newer form of AI, as compared with analytical AI, but existing AI ethics frameworks already cover a range of privacy and human rights issues, so they are applicable. The previous work to create those frameworks has made it easier and faster to adapt to the specific aspects of generative AI from an ethical perspective.

One of the main complexities is the relatively low community understanding of how generative AI actually works and, particularly, the science behind it. Very few people can distinguish between analytical and generative AI. Most people in senior roles haven't made the distinction yet or identified the true impact. The issue is, if you don't understand the underlying technology well enough, then it's difficult to make the frameworks work in practice. Analytical and generative AI share similar core science. However, generative AI can pose greater risks than simple classification AI. But the nature and scale of those risks generally haven't been worked through in most organizations. Simply setting black-and-white rules – such as you can or can't use generative AI – isn't usually the best answer. You need to understand how to safely use it.

How will organizations need to adapt their ethical frameworks in response to generative AI?

Ed Santow: First and foremost, they need to understand that skills and knowledge are vital. They need to upskill their staff and develop a better understanding of the technology and its implications – and this applies at all levels of the organization. Second, they need to set a nuanced policy framework, outline how to use such technology safely, and develop appropriate risk mitigation procedures that can flag when it's not safe to rely on the outputs of generative AI applications. Most AI ethics frameworks don't go into this level of detail. Finally, consideration needs to be given to how generative AI can be used lawfully. For example, entering confidential client data – or proprietary company data – into ChatGPT is likely to be unlawful, yet we also know this is happening.

What advice can you offer CDOs and senior leaders in relation to navigating some of these challenges?

Ed Santow: There are simply no shortcuts. People can't assume that even though others in their industry are using generative AI, their organization can use it without considering the legal and ethical ramifications. They also need to be able to experiment safely with such technology. For example, a new chatbot based on generative AI shouldn't be simply unleashed on customers. They need to first test and validate it in a controlled environment to understand all the risks – including the ethical and legal ramifications. Leaders need to ensure that an appropriately safe test environment is established to mitigate any risk of harm to staff or customers.

Summary

In this article, we went through various ethical issues that can arise while implementing Generative AI and some ways to tackle these challenges effectively. We also learned some practical best practices through the opinion of an expert in the field of Generative AI.

Author Bio
Dr. Alex Antic is an award-winning Data Science and Analytics Leader, Consultant, and Advisor, and a highly sought Speaker and Trainer, with over 20 years of experience. Alex is the CDO and co-founder of Healices Health - which focuses on advancing cancer care using Data Science - and is co-founder of Two Twigs, a Data Science consulting, advisory, and training company. Alex has been described as "one of Australia's iconic data leaders" and "one of the most premium thought leaders in data analytics globally". He was recognized in 2021 as one of the Top 5 Analytics Leaders by the Institute of Analytics Professionals of Australia (IAPA). Alex is an Adjunct Professor at RMIT University, and his qualifications include a Ph.D. in Applied Mathematics.

LinkedIn

Introduction to LLaMA

Dario Radečić
02 Jun 2023
7 min read
It seems like everyone, and their grandmothers, are discussing Large Language Models (LLMs) these days. These models have gotten all the hype since ChatGPT's release in late 2022. The average user might get lost in acronyms such as GPT, PaLM, or LLaMA, and that's understandable. This article will shed some light on why you should generally care about LLMs and exactly what they bring to the table. By the end of this article, you'll have a fundamental understanding of the LLaMA model, how it compares to other large language models, and will have the 7B flavor of LLaMA running locally on your machine. There's no time to waste, so let's dive straight in!

The Purpose of LLaMA and Other Large Language Models

The main idea behind LLMs is to understand and generate human-like text based on the input you feed into them. Ask a human-like question and you'll get a human-like response back. You know what we're talking about if you've ever tried ChatGPT. These models are typically trained on huge volumes of data, sometimes even as large as everything that has been written on the Internet over some time span. This data is then fed into the algorithms using unsupervised learning, which has the task of learning words and the relationships between them.

Large Language Models can be generic or domain-specific. You can use a generic LLM and fine-tune it for a certain task, similar to what OpenAI did with Codex (an LLM for programming). As the end user, you can benefit from LLMs in several ways:

- Content generation – You can use LLMs to generate content for personal or professional purposes, such as articles, emails, social media posts, and so on.
- Information retrieval – LLMs help you find relevant information quickly and often do a better job when compared to a traditional web search. Just be aware of the training date cap the model has – it might not do as well on recent events.
- Language assistance and translation – These models can detect spelling errors and grammar mistakes, suggest writing improvements, provide synonyms and idioms, and even provide a meaningful translation from one language to another.

At the end of the day, probably everyone can find a helpful use case in a large language model. But which one should you choose? There are many publicly available models, but the one that stands out recently is LLaMA. Let's see why and how it works next.

What is LLaMA and How Does it Work?

LLaMA stands for "Large Language Model Meta AI" and is a large language model published by – you've guessed it – Meta AI. It was released in February 2023 in a variety of flavors – from 7 billion to 65 billion parameters.

A LLaMA model uses the Transformer architecture and works by generating probability distributions over sequences of words (or tokens). In plain English, this means the LLaMA model predicts the next most reasonable word given the sequence of input words.

It's interesting to point out that LLaMA-13B (13 billion parameters) outperforms GPT-3 on most benchmarks, even though GPT-3 has 13 times more parameters (175 billion). The more parameter-rich LLaMA (65B parameters) is on par with the best large language models we have available today, according to the official paper by Meta AI. In fact, let's take a look at these performance differences ourselves. The following table from the official paper summarizes it well:

Figure 1 - LLaMA performance comparison with other LLMs

Generally speaking, the more parameters the LLaMA model contains, the better it performs.
The interesting fact is that even the 7B version is comparable in performance – or even outperforms – models with significantly more parameters. The 7B model performs reasonably well, so how can you try it out? In the next section, you'll have LLaMA running locally with only two shell commands.

How to Run LLaMA Locally?

You'll need a couple of things to run LLaMA locally – decent hardware (it doesn't have to be the newest), a lot of hard drive space, and a couple of software dependencies installed. It doesn't matter which operating system you're using, as the implementation we're about to show you is cross-platform.

For reference, we ran the 7B parameter model on an M1 Pro MacBook with 16 GB of RAM. The model occupied 31 GB of storage, and you can expect this amount to grow if you choose a LLaMA flavor with more parameters. Regarding software dependencies, you'll need a recent version of Node. We used version 18.16.0 with npm version 9.5.1.

Once you have Node installed, open up a new Terminal/CMD window and run the following command. It will install the 7B LLaMA model:

```
npx dalai llama install 7B
```

You might get a prompt to install dalai first, so just type y into the console. Once Dalai is installed, it will proceed to download the model weights. You should see something similar during this process:

Figure 2 - Downloading LLaMA 7B model weights

It will take some time, depending on your Internet speed. Once done, you'll have the 7B model available in the Dalai web UI. Launch it with the following shell command:

```
npx dalai serve
```

This is the output you should see:

Figure 3 - Running dalai web UI locally

The web UI is now running locally on port 3000. As soon as you open http://localhost:3000, you'll be presented with the interface that allows you to choose the model, tweak the parameters, and select a prompting template. For reference, we've selected the chatbot template and left every setting as default. The prompt we've entered is "What is machine learning?" Here's what the LLaMA model with 7B parameters outputted:

Figure 4 - Dalai user interface

The answer is mostly correct, but the LLaMA response started looking like a blog post toward the end ("In this article…"). As with all large language models, you can use it to draw insights, but only after some human intervention. And that's how you can run a large language model locally! Let's make a brief recap next.

Conclusion

It's getting easier and cheaper to train large language models, which means the number of options you'll have is only going to grow over time. LLaMA was only recently released to the public, and today you've learned what it is, got a high-level overview of how it works, and how to get it running locally. You might want to tweak the 7B version if you're not getting the desired response or opt for a version with more parameters (if your hardware allows it). Either way, have fun!

Author Bio:

Dario Radečić is a Senior Data Scientist at Neos, Croatia. Book author: "Machine Learning Automation with TPOT". Owner of betterdatascience.com. You can follow him on Medium: https://medium.com/@radecicdario


ChatGPT for Information Retrieval and Competitive Intelligence

Valentina Alto
02 Jun 2023
2 min read
This article is an excerpt from the book Modern Generative AI with ChatGPT and OpenAI Models, by Valentina Alto. This book will provide you with insights into the inner workings of the LLMs and guide you through creating your own language models.

Information retrieval and competitive intelligence are fields where ChatGPT is a game-changer. It can retrieve information from its knowledge base and reframe it in an original way.

One example is using ChatGPT as a search engine to provide summaries, reviews, and recommendations for books. Alternatively, we could ask for some suggestions for a new book we wish to read based on our preferences.

If we design the prompt with specific information, ChatGPT can serve as a tool for pointing us towards the right references for research or studies, for example, asking ChatGPT to list relevant references for feedforward neural networks.

ChatGPT can also be useful for competitive intelligence, for example, generating a list of existing books with similar content, or providing advice on how to be competitive in the market. ChatGPT can also suggest improvements regarding book content to make it stand out.

Overall, ChatGPT can be a valuable assistant for information retrieval and competitive intelligence. However, it's important to remember that the knowledge base cutoff is 2021, so real-time information may not be available.

About the Author

Valentina Alto graduated in 2021 in Data Science. Since 2020 she has been working in Microsoft as Azure Solution Specialist and, since 2022, she focused on Data&AI workloads within the Manufacturing and Pharmaceutical industry. She has been working on customers' projects closely with system integrators to deploy cloud architecture with a focus on datalake house and DWH, data integration and engineering, IoT and real-time analytics, Azure Machine Learning, Azure cognitive services (including Azure OpenAI Service), and PowerBI for dashboarding. She holds a BSc in Finance and an MSc degree in Data Science from Bocconi University, Milan, Italy. Since her academic journey she has been writing Tech articles about Statistics, Machine Learning, Deep Learning and AI on various publications. She has also written a book about the fundamentals of Machine Learning with Python.

You can connect with Valentina on: LinkedIn | Medium


Customize ChatGPT for Specific Tasks Using Effective Prompts – Shot Learning

Valentina Alto
02 Jun 2023
5 min read
This article is an excerpt from the book Modern Generative AI with ChatGPT and OpenAI Models, by Valentina Alto. This book will provide you with insights into the inner workings of the LLMs and guide you through creating your own language models.

We know for a fact that OpenAI models, and hence also ChatGPT, come in a pre-trained format. They have been trained on a huge amount of data and have had their (billions of) parameters configured accordingly. However, this doesn't mean that those models can't learn anymore. One way to customize an OpenAI model and make it more capable of addressing specific tasks is by fine-tuning.

Fine-tuning is a proper training process that requires a training dataset, compute power, and some training time (depending on the amount of data and compute instances). That is why it is worth testing another method for our model to become more skilled in specific tasks: shot learning.

The idea is to let the model learn from simple examples rather than the entire dataset. Those examples are samples of the way we would like the model to respond, so that the model not only learns the content but also the format, style, and taxonomy to use in its response. Furthermore, shot learning occurs directly via the prompt (as we will see in the following scenarios), so the whole experience is less time-consuming and easier to perform.

The number of examples provided determines the level of shot learning we are referring to. In other words, we refer to zero-shot if no example is provided, one-shot if one example is provided, and few-shot if more than 2-3 examples are provided. Let's focus on each of those scenarios:

Zero-shot learning

In this type of learning, the model is asked to perform a task for which it has not seen any training examples. The model must rely on prior knowledge or general information about the task to complete it. For example, a zero-shot learning approach could be that of asking the model to generate a description, as defined in my prompt:

One-shot learning

In this type of learning, the model is given a single example of each new task it is asked to perform. The model must use its prior knowledge to generalize from this single example to perform the task. If we consider the preceding example, I could provide my model with a prompt-completion example before asking it to generate a new one:

Note that the way I provided an example was similar to the structure used for fine-tuning:

Few-shot learning

In this type of learning, the model is given a small number of examples (typically between 3 and 5) of each new task it is asked to perform. The model must use its prior knowledge to generalize from these examples to perform the task. Let's continue with our example and provide the model with further examples:

The nice thing about few-shot learning is that you can also control model output in terms of how it is presented. You can also provide your model with a template of the way you would like your output to look. For example, consider the following tweet classifier:

Let's examine the preceding figure. First, I provided ChatGPT with some examples of labeled tweets. Then, I provided the same tweets but in a different data format (list format), as well as the labels in the same format. Finally, in list format, I provided unlabeled tweets so that the model returns a list of labels.
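The screenshots referenced above come from the ChatGPT UI; the same idea can also be expressed against the API. Here is a minimal sketch (my own illustration, not from the book) of few-shot learning through the prompt, passing a handful of invented labeled tweets as in-context examples via openai.ChatCompletion:

```python
# A minimal sketch (not from the book): few-shot tweet classification by
# placing labeled examples directly in the messages list. The tweets and
# labels below are invented for illustration.
import openai  # assumes openai.api_key has already been configured

examples = [
    ("I loved the new update, everything feels faster!", "positive"),
    ("The app keeps crashing and I'm getting really frustrated.", "negative"),
    ("Release notes for version 2.3 are out.", "neutral"),
]

messages = [{"role": "system",
             "content": "You classify tweets as positive, negative, or neutral."}]
for tweet, label in examples:
    messages.append({"role": "user", "content": tweet})
    messages.append({"role": "assistant", "content": label})

# The new, unlabeled tweet we want the model to classify
messages.append({"role": "user", "content": "Not sure how I feel about the redesign."})

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```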
Understanding Prompt Design

The output format is not the only thing you can teach your model, though. You can also teach it to act and speak with a particular jargon and taxonomy, which could help you obtain the desired result with the desired wording:

Or, imagine you want to generate a chatbot called Simpy that is very funny and sarcastic while responding:

We have to say, with this last one, ChatGPT nailed it.

Summary

Shot-learning possibilities are limitless (and often more useful than Simpy); it's only a matter of testing and a little bit of patience in finding the proper prompt design. As mentioned previously, it is important to remember that these forms of learning are different from traditional supervised learning, as well as fine-tuning. In few-shot learning, the goal is to enable the model to learn from very few examples, and to generalize from those examples to new tasks.

About the Author

Valentina Alto graduated in 2021 in Data Science. Since 2020 she has been working in Microsoft as Azure Solution Specialist and, since 2022, she focused on Data&AI workloads within the Manufacturing and Pharmaceutical industry. She has been working on customers' projects closely with system integrators to deploy cloud architecture with a focus on datalake house and DWH, data integration and engineering, IoT and real-time analytics, Azure Machine Learning, Azure cognitive services (including Azure OpenAI Service), and PowerBI for dashboarding. She holds a BSc in Finance and an MSc degree in Data Science from Bocconi University, Milan, Italy. Since her academic journey she has been writing Tech articles about Statistics, Machine Learning, Deep Learning and AI on various publications. She has also written a book about the fundamentals of Machine Learning with Python.

You can connect with Valentina on: LinkedIn | Medium
4 Ways to Treat a Hallucinating AI with Prompt Engineering

Andrei Gheorghiu
02 Jun 2023
9 min read
Hey there, fellow AI enthusiast! Are you tired of your LLM (Large Language Model) creating random, nonsensical outputs? Fear not, because today I'm opening the box of prompt engineering pills, looking for something to help you reduce those pesky hallucinations.

First, let's break down what we're dealing with. Prompt engineering is the art of creating input prompts for AI models in a way that guides them towards generating more accurate, relevant, and useful responses. Think of it as gently nudging your AI model in the right direction, so it doesn't end up lost in a sea of information. The word "engineering" was probably not the wisest choice in many people's opinion, but that's already history, as everybody has got used to it as it is. In my opinion, it's more of a mix of logical thinking, creativity, language, and problem-solving skills. It feels a lot like writing code, but using natural language instead of structured syntax and vocabulary. While the user gets the freedom of using their own language and depth, with great freedom comes great responsibility: an average prompt will probably result in an average answer. The issue I'm addressing in this article is just one example of the many pitfalls that can be avoided with some basic prompt hygiene when interacting with AI.

Now, onto the bizarre world of hallucinations. In the AI realm, hallucinations refer to instances when an AI model (particularly LLMs) generates output that is unrelated, implausible, or just plain weird. Some of you may have been there already, asking an AI model like GPT-3 to write a paragraph about cats, only to get a response about aliens invading Earth! And while the issue has been greatly mitigated in GPT-4 and similar newer AI models, it's still something to be concerned about, especially if you're looking for precise, fact-based responses. To make matters worse, sometimes the hallucinated answer sounds very convincing and seems plausible in the given context.

For example, when asked the name of the Voodoo Lady in the Monkey Island series of games, ChatGPT provides a series of convincing answers, all of which are wrong. It's a bit of a trick question, as she is simply known as the Voodoo Lady in the original series of games, but you can see how convinced ChatGPT is of the answers that it provides (and continued to provide). If I hadn't already known the answer, then I never would have known that ChatGPT was hallucinating.

What Are the Technical Reasons Why AI Models Hallucinate?
Training Data: Machine learning models are trained on vast amounts of text data from diverse sources. This data may contain inconsistencies, noise, and biases. As a result, when generating text, the model might output content that is influenced by these inconsistencies or noise, leading to hallucinations.
Probabilistic Nature: Generative models like GPTs are based on probabilistic techniques that predict the next token (e.g., word or character) in a sequence, given the context. They estimate the likelihood of each token appearing and sample tokens based on these probabilities. If you've ever watched "Family Feud" on TV, you get a pretty good idea of what token prediction means. This sampling process can sometimes result in unpredictable and implausible outputs, as the model might choose less likely tokens, generating hallucinations. To make matters worse, GPTs are usually not built to say "I don't know" when they lack information. Instead, they produce the most likely answer.
Lack of Ground Truth: Unlike supervised learning tasks, where there is a clear ground truth for the model to learn from, generative tasks do not have a single correct output. Most LLMs that we use do not have the capability to check the facts in their output against a real-time validated source, as they do not have Internet access. The absence of a ground truth can make it difficult for the model to learn constraints and discern what is plausible or correct, leading to the generation of hallucinated content.
Optimization Challenges: During training, the models are optimized using a loss function that measures the discrepancy between the generated output and the expected outcome. In generative tasks, this loss function may not always capture the nuances of human language, making it difficult for the model to learn the correct patterns and avoid hallucinations.
Model Complexity: State-of-the-art generative models like GPT-3 have billions of parameters that make them highly expressive and capable of capturing complex patterns in the data. However, this complexity can also result in overfitting and memorization of irrelevant or spurious patterns, causing hallucinations in generated outputs.

So, clearly, we have a problem to solve. Here are four tips for how to improve your prompts and get better responses from ChatGPT.

Four Tips for Improving Your Prompts

Not being clear and specific in your prompts
To get the best results, you must clearly understand the problem yourself first. Make sure you know what you want to achieve and keep your prompts focused on that objective. The more explicit your prompt, the better the AI model can understand what you're looking for. So instead of asking, "Tell me about the Internet," try something like, "Explain how the Internet works and its importance in modern society." By doing this, you're giving your AI model a clearer picture of what you want. Sometimes you'll have to work through multiple prompt iterations to get the result you're after, and sometimes the results you get may steer away from the initial topic. Make sure to stay on track and avoid deviating from the task at hand: bring the conversation back into focus, otherwise the hallucination effect may amplify.

Ignoring the power of an example
Everyone loves examples, they say – even AI models! Providing examples in your prompt helps your model understand the context and generate more accurate responses. For instance, "Write a brief history of Python, similar to how the history of Java is described in this article: {example}." This not only gives the AI a clear topic but also a reference point to follow. Providing a well-structured example can also save you a lot of time in explaining the output you're expecting to receive. Without an example, your prompt might be too generic, allowing too much freedom of interpretation. Think about it like a conversation: sometimes, the best way to make yourself understood by the other party is to provide an example. Do you want to make sure there's no misunderstanding from the start? Include an example in your initial prompt.

Not following "Divide et Impera"
Have you ever tried to build IKEA furniture without instructions? It's a bit like that for AI models dealing with complex prompts. Too many nuts and bolts to keep track of. Too many variables to consider.
Instead of asking the model to "Explain the process of creating a neural network," break it down into smaller, more manageable tasks like, "Step 1: Define the problem; Step 2: Collect and prepare data," and so on. This way, the AI can tackle each step individually and generate more coherent outputs. It's also very useful when you are trying to generate a more verbose and comprehensive response rather than a simple factual answer. You can, of course, combine both approaches: ask the AI to provide the steps first, and then ask for more information on each step.

Relying on the first response you receive
As most LLMs in use today do not provide enough transparency in their reasoning process, working with them sometimes feels like interacting with a magic box. The non-deterministic nature of generative AI can further amplify this problem, so when you need precision it's best to experiment with various prompt formats and compare the results. Pro tip: some open-source models can already be queried in parallel on dedicated comparison websites. Or, when interacting with a single AI model, try multiple approaches for your query, like rephrasing the prompt, asking a question, or presenting it as a statement.

For example, if you're looking for information about cloud computing, you could try:
"What is cloud computing and how does it work?"
"Explain cloud computing and its benefits."
"Cloud computing has transformed the IT industry; discuss its impact and future potential."

Some LLMs, such as Google's Bard, provide multiple responses by default so you can pick the most suitable from among them. Compare the outputs, validate any important facts with other independent sources, and look for implausible or weird responses. Although a hallucination is still possible, by using different prompts you'll greatly reduce the probability of generating the same hallucination every time, and therefore it will be easier to detect. Returning to our Voodoo Lady example earlier, by rephrasing the question we can get the right answer from ChatGPT. A small script along these lines is sketched below.
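If you prefer to automate the comparison, here is a minimal sketch of the idea in Python using the openai package. It is not from the original article: the model name and the three phrasings are placeholders, and the API call may need adjusting to your library version; the point is simply to collect answers to several rephrasings side by side so that an odd one out stands out.

import openai

# Hypothetical rephrasings of the same question (placeholders).
prompts = [
    "What is cloud computing and how does it work?",
    "Explain cloud computing and its benefits.",
    "Cloud computing has transformed the IT industry; discuss its impact.",
]

answers = []
for prompt in prompts:
    # Older openai versions expose ChatCompletion.create; newer ones
    # use OpenAI().chat.completions.create instead.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    answers.append(response.choices[0].message["content"])

# Print the answers side by side so inconsistencies are easy to spot.
for prompt, answer in zip(prompts, answers):
    print(f"PROMPT: {prompt}\nANSWER: {answer}\n{'-' * 40}")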
And there you have it! By trying to avoid these common mistakes, you'll be well on your way to minimizing AI hallucinations and getting the output you're looking for. We all know how fast and unpredictable this domain can be, so the best approach is to learn together and share best practices within the community. The best prompt engineering books have not yet been written, and there's a ton of new things to learn about this emergent technology, so let's stay in touch and share our findings!

Happy prompting!

About the Author
Andrei Gheorghiu is an experienced trainer with a passion for helping learners achieve their maximum potential. He always strives to bring a high level of expertise and empathy to his teaching. With a background in IT audit, information security, and IT service management, Andrei has delivered training to over 10,000 students across different industries and countries. He is also a Certified Information Systems Security Professional and Certified Information Systems Auditor, with a keen interest in digital domains like Security Management and Artificial Intelligence. In his free time, Andrei enjoys trail running, photography, video editing, and exploring the latest developments in technology.

You can connect with Andrei on:
LinkedIn
Twitter
Learning Essential Linux Commands for Navigating the Shell Effectively 

Expert Network
16 Aug 2021
9 min read
Once we learn how to deploy an Ubuntu server, how to manage users, and how to manage software packages, we should take a moment to learn some important concepts and commands that will allow us to build more of the foundational knowledge that will serve us well when tackling advanced concepts and treading the path of expertise. These foundational concepts include core Linux commands for navigating the shell.

This article is an excerpt from the book Mastering Ubuntu Server, Third Edition by Jeremy "Jay" La Croix – a hands-on book that will teach you how to deploy, maintain, and troubleshoot Ubuntu Server.

Learning essential Linux commands
Building a solid competency on the command line is essential and effectively gives any system administrator or engineer superpowers. Our new abilities won't allow us to leap tall buildings in a single bound, but will definitely enable us to execute terminal commands as if we're ninjas. While we won't master the art of using the command line in this section (that can only come with years and experience), we will definitely become more confident.

First, let's talk about moving from one place to another within the Linux filesystem. Specifically, by "Linux filesystem", I'm referring to the default structure of the various folders (also referred to as "directories") contained within your Ubuntu installation. The Linux filesystem contains many important directories, each with their own designated purpose, which we'll talk about in more detail in the book. Before we can explore that further, we'll need to learn how to navigate from one directory to another. The first command we'll cover in this section, relative to navigating the filesystem, clarifies the directory you're currently working from. For that, we have the pwd command.

The pwd command
pwd stands for print working directory, and shows you where you currently are in the filesystem. If you run it, you may see output such as this:
Figure 4.1: Viewing the current working directory
In this example, when I ran pwd, the output informed me that my current working directory is /home/jay. This is known as your home directory and, by default, every user has one. This is where all the files for your user account will reside by default. Sure, you can create files anywhere you'd like, even outside your home directory, if you have permission to do so or you use sudo. But just because you can doesn't mean you should. As you'll learn in this article, the Linux filesystem has a designated place for just about everything. But your home directory, located at /home/<username>, is yours. You own it, you control it – it's your home on the server. In the early 2000s, Linux installations with a graphical user interface even depicted your home directory with an icon of a house.

Typically, files that you create in your home directory will have a permission string similar to this:

-rw-rw-r-- 1 jay  jay      0 Jul  5 14:10 testfile.txt

You can see that, by default, files you create in your home directory are owned by your user and your group, and are readable by all three categories (user, group, and other).

The cd command
To change our current directory and navigate to another, we can use the cd command along with a path we'd like to move to:

cd /etc

Now, I haven't gone over the file and directory layout yet, so I just randomly picked the etc directory. The forward slash at the beginning designates the beginning of the filesystem. More on that later.
Now, we’re in the /etc directory, and our command prompt has even changed as well:  Figure 4.2: Command prompt and pwd command after changing a directory  As you could probably guess, the cd command stands for change directory, and it’s how you move your working directory from one to another while navigating around. You can use the following command, for example, to return back to the home directory:  cd /home/<user>  In fact, there are several ways to return home, a few of which are demonstrated in the following screenshot:    Figure 4.3: Other ways of navigating to the home directory  The first command, cd -, doesn’t actually have anything to do with your home directory specifically. It’s a neat trick to return you to whatever directory you were in most previously. For me, the cd – command took me to the previous directory I was just in, which just so happened to be /home/jay. The second command, cd /home/jay, took me directly to my home directory since I called out the entire path. The last command, cd ~, also took me to my home directory. This is because ~ is shorthand for the full path to your home directory, so you don’t really ever have to type out the entire path to /home/<user>. You can just refer to that path simply as ~.  The ls command Another essential command is ls. The ls command lists the contents of the current working directory. We probably don’t have any contents in our home directory yet. But if we navigate to /etc by running cd /etc, as we did earlier, and then execute ls, we’ll see that the /etc</span> directory has a number of files in it. Go ahead and try it yourself and see:  cd /etc ls  We didn’t actually have to change our working directory to /etc just to list the contents. We could’ve just executed the following command:  ls /etc  Even better, we can run:  ls -l /etc  This gives us the contents in a long list, which I think is much easier to understand. It will show each directory or file entry on its own line, along with the permission string. But you probably already must be knowing ls as well as ls -l so I won’t go into too much more detail here. The -l portion of the ls command in that example is known as an argument. I’m not referring to an argument such as the ever-ensuing debate in the Linux community over which command-line text editor is the best between vim and emacs (it’s clearly vim). Instead, I’m referring to the concept of an argument in shell commands that allow you to override the defaults, or feed options to the command in some way, such as in this example, where we format the output of ls to be in a long list.  The rm command The rm command is another one that we touched on in, when we were discussing manually removing the home directory of a user that was removed from the system. So, at this point, you’re probably well aware of that command and what it does (it removes files and directories). It’s a potentially dangerous command, as you could use it to accidentally remove something that you shouldn’t have. We used the following command to remove the home directory of user dscully:  rm -r /home/dscully  As you can see, we’re using the -r argument to alter the behavior of the rm command, which, by default, doesn’t remove directories but only files. The -r argument instructs rm to remove everything recursively, even if it’s a directory. The -r argument will also remove subdirectories of the path as well, so you’ll definitely want to be careful with this command. 
As I've mentioned earlier in the book, if you use sudo with rm, you can hypothetically delete your entire Ubuntu installation!

Another option offered by rm is the -f argument, which is short for force, and it tells rm not to prompt before removing things. This argument won't be needed as often, and use cases for it are outside the scope of this article. But keep in mind that it exists, should you need it.

The touch command
Another foundational command that's good to know is touch, which actually serves two purposes. First, assuming you have permission to do so in your current working directory, the touch command will create an empty file if it doesn't already exist. Second, the touch command will update the modification time of a file or directory if it does already exist:
Figure 4.4: Experimenting with the touch command
To illustrate this, in the related screenshot, I ran several commands. First, I ran the following command to create an empty file:

touch testfile.txt

That file didn't exist before, so when I ran ls -l afterward, it showed the newly created file with a size of 0 bytes. Next, I ran the touch testfile.txt command again a minute later, and you can see in the screenshot that the modification time went from 15:12 to 15:13.

When it comes to viewing the contents of a file, we'll get to that later on in the book, Mastering Ubuntu Server, Third Edition. And there are definitely more commands that we'll need to learn to build the basis of our foundation. But for now, let's take a break from the foundational concepts to understand the Linux filesystem layout better.

Summary
There are more Linux commands than you will ever be able to memorize. Most of us just memorize our favorite commands and variations of commands. You'll develop your own menu of these commands as you learn and expand your knowledge. In this article, we covered many of the foundational commands that are, for the most part, essential: pwd, cd, ls, rm, and touch were explored this time around.

About the Author
Jeremy "Jay" La Croix is a technologist and open-source enthusiast, specializing in Linux. Jay is currently the Director of Cloud Services at Adaptavist. He has 20 years of field experience across different firms as a Solutions Architect and holds a master's degree in Information Systems Technology Management from Capella University.

In addition, Jay also has an active Linux-focused YouTube channel with over 186K followers and 15.9M views, available at LearnLinux.tv, where he posts instructional tutorial videos and other Linux-related content.
Gain Practical Expertise with the Latest Edition of Software Architecture with C# 9 and .NET 5 

Expert Network
08 Jul 2021
3 min read
Software architecture is one of the most discussed topics in the software industry today, and its importance will certainly grow in the future. But the speed at which new features are added to software solutions keeps increasing, and new architectural opportunities keep emerging. To strengthen your command of this topic, Packt brings you the Second Edition of Software Architecture with C# 9 and .NET 5 by Gabriel Baptista and Francesco Abbruzzese – a fully revised and expanded guide featuring the latest features of .NET 5 and C# 9.

This book covers the most common design patterns and frameworks involved in modern cloud-based and distributed software architectures. It discusses when and how to use each pattern by providing you with practical real-world scenarios. The book also presents techniques and processes such as DevOps, microservices, Kubernetes, continuous integration, and cloud computing, so that you can have a best-in-class software solution developed and delivered for your customers.

This book will help you to understand the product that your customer wants from you. It will guide you to deliver and solve the biggest problems you can face during development. It also covers the do's and don'ts that you need to follow when you manage your application in a cloud-based environment. You will learn about different architectural approaches, such as layered architectures, service-oriented architecture, microservices, Single Page Applications, and cloud architecture, and understand how to apply them to specific business requirements.

Finally, you will deploy code in remote environments or on the cloud using Azure. All the concepts in this book are explained with the help of real-world practical use cases where design principles make the difference when creating safe and robust applications. By the end of the book, you will be able to develop and deliver highly scalable and secure enterprise-ready applications that meet your end customers' business needs.

It is worth mentioning that Software Architecture with C# 9 and .NET 5, Second Edition will not only cover the best practices that a software architect should follow for developing C# and .NET Core solutions, but it will also discuss all the environments that we need to master in order to develop a software product according to the latest trends.

This second edition is improved in code and adapted to the new opportunities offered by C# 9 and .NET 5. We have added new frameworks and technologies such as gRPC and Blazor, and described Kubernetes in more detail in a dedicated chapter.

To get the most out of this book, treat it as guidance that you may want to revisit many times for different circumstances. Do not forget to have Visual Studio Community 2019 or higher installed, and be sure that you understand C# and .NET principles.
Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the Worldwide Developers Conference (WWDC) in 2015, they also declared that Swift was the world's first protocol-oriented programming (POP) language. From its name, we might assume that POP is all about protocols; however, that would be a wrong assumption. POP is about so much more than just protocols; it is actually a new way of not only writing applications but also thinking about programming.

This article is an excerpt from the book Mastering Swift, 6th Edition by Jon Hoffman. In this article, we will discuss protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way.

Requirements
When we develop applications, we usually have a set of requirements that we need to develop against. With that in mind, let's define the requirements for the animal types that we will be creating in this article:
We will have three categories of animals: land, sea, and air.
Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories.
Animals may attack and/or move when they are on a tile that matches the categories they are in.
Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, then they will be considered dead.

POP Design
We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design:
Figure 1: Protocol-oriented design
In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions.

Protocol inheritance
Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. We can inherit requirements from multiple protocols, whereas a class in Swift can have only one superclass. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix and match them to create larger protocols. You will want to be careful not to create protocols that are too granular, because they will become hard to maintain and manage.

Protocol composition
Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Protocol inheritance and composition are really powerful features, but they can also cause problems if used wrongly. They may not seem that powerful on their own; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let's look at how powerful this paradigm is.

Protocol-oriented design: putting it all together
We will begin by writing the Animal superclass as a protocol:

protocol Animal {
    var hitPoints: Int { get set }
}

In the Animal protocol, the only item that we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal. Here, we only need the hitPoints property. Next, we need to add an Animal protocol extension, which will contain the functionality that is common to all types that conform to the protocol.
Our Animal protocol extension would contain the following code:

extension Animal {
    mutating func takeHit(amount: Int) {
        hitPoints -= amount
    }
    func hitPointsRemaining() -> Int {
        return hitPoints
    }
    func isAlive() -> Bool {
        return hitPoints > 0 ? true : false
    }
}

The Animal protocol extension contains the takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods.

Now let's define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals respectively:

protocol LandAnimal: Animal {
    var landAttack: Bool { get }
    var landMovement: Bool { get }
    func doLandAttack()
    func doLandMovement()
}

protocol SeaAnimal: Animal {
    var seaAttack: Bool { get }
    var seaMovement: Bool { get }
    func doSeaAttack()
    func doSeaMovement()
}

protocol AirAnimal: Animal {
    var airAttack: Bool { get }
    var airMovement: Bool { get }
    func doAirAttack()
    func doAirMovement()
}

These three protocols only contain the functionality needed for their particular type of animal. Each of these protocols contains only four requirements. This makes our protocol design much easier to read and manage. The design is also much safer because the functionalities for the various animal types are isolated in their own protocols rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of the animal by the protocols it conforms to. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here.

Now, let's look at how we would create our Lion and Alligator types using protocol-oriented design:

struct Lion: LandAnimal {
    var hitPoints = 20
    let landAttack = true
    let landMovement = true
    func doLandAttack() { print("Lion Attack") }
    func doLandMovement() { print("Lion Move") }
}

struct Alligator: LandAnimal, SeaAnimal {
    var hitPoints = 35
    let landAttack = true
    let landMovement = true
    let seaAttack = true
    let seaMovement = true
    func doLandAttack() { print("Alligator Land Attack") }
    func doLandMovement() { print("Alligator Land Move") }
    func doSeaAttack() { print("Alligator Sea Attack") }
    func doSeaMovement() { print("Alligator Sea Move") }
}

Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type that conforms to multiple protocols is called protocol composition and is what allows us to use smaller protocols, rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, then they would also inherit the functions added by those extensions. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to.
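Before we move on to polymorphism, here is a short usage sketch (not part of the original excerpt) showing that the methods from the Animal protocol extension really do come along for free on any conforming type; the damage values are arbitrary.

var lion = Lion()
lion.takeHit(amount: 15)           // inherited from the Animal extension
print(lion.hitPointsRemaining())   // prints 5
print(lion.isAlive())              // prints true

lion.takeHit(amount: 10)
print(lion.isAlive())              // prints false once hit points drop to 0 or below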
Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let's look at how this works:

var animals = [Animal]()
animals.append(Alligator())
animals.append(Alligator())
animals.append(Lion())

for (index, animal) in animals.enumerated() {
    if let _ = animal as? AirAnimal {
        print("Animal at \(index) is Air")
    }
    if let _ = animal as? LandAnimal {
        print("Animal at \(index) is Land")
    }
    if let _ = animal as? SeaAnimal {
        print("Animal at \(index) is Sea")
    }
}

In this example, we create an array named animals that will contain Animal types. We then create two instances of the Alligator type and one instance of the Lion type, which are added to the animals array. Finally, we use a for-in loop to loop through the array and print out the animal type based on the protocols that each instance conforms to.

Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman.

About the Author
Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.
Top 6 Cybersecurity Books from Packt to Accelerate Your Career

Expert Network
28 Jun 2021
7 min read
With new technology threats, rising international tensions, and state-sponsored cyber-attacks, cybersecurity is more important than ever. In organizations worldwide, there is not only a dire need for cybersecurity analysts, engineers, and consultants but the senior management executives and leaders are expected to be cognizant of the possible threats and risk management. The era of cyberwarfare is now upon us. What we do now and how we determine what we will do in the future is the difference between whether our businesses live or die and whether our digital self-survives the digital battlefield.  In this article, we'll discuss 6 titles from Packt’s bank of cybersecurity resources for everyone from an aspiring cybersecurity professional to an expert. Adversarial Tradecraft in Cybersecurity  A comprehensive guide that helps you master cutting-edge techniques and countermeasures to protect your organization from live hackers. It enables you to leverage cyber deception in your operations to gain an edge over the competition.  Little has been written about how to act when live hackers attack your system and run amok. Even experienced hackers sometimes tend to struggle when they realize the network defender has caught them and is zoning in on their implants in real-time. This book provides tips and tricks all along the kill chain of an attack, showing where hackers can have the upper hand in a live conflict and how defenders can outsmart them in this adversarial game of computer cat and mouse.  This book contains two subsections in each chapter, specifically focusing on the offensive and defensive teams. Pentesters to red teamers, SOC analysis to incident response, attackers, defenders, general hackers, advanced computer users, and security engineers should gain a lot from this book. This book will also be beneficial to those getting into purple teaming or adversarial simulations, as it includes processes for gaining an advantage over the other team.  The author, Dan Borges, is a passionate programmer and security researcher who has worked in security positions for companies such as Uber, Mandiant, and CrowdStrike. Dan has been programming various devices for >20 years, with 14+ years in the security industry.  Cybersecurity – Attack and Defense Strategies, Second Edition  A book that enables you to counter modern threats and employ state-of-the-art tools and techniques to protect your organization against cybercriminals. It is a completely revised new edition of the bestselling book, covering the very latest security threats and defense mechanisms including a detailed overview of Cloud Security Posture Management (CSPM) and an assessment of the current threat landscape, with additional focus on new IoT threats and cryptomining.  This book is for IT professionals venturing into the IT security domain, IT pentesters, security consultants, or those looking to perform ethical hacking. Prior knowledge of penetration testing is beneficial.  This book is authored by Yuri Diogenes and Dr. Erdal Ozkaya. Yuri Diogenes is a professor at EC-Council University for their master's degree in cybersecurity and a Senior Program Manager at Microsoft for Azure Security Center. Dr. Erdal Ozkaya is a leading Cybersecurity Professional with business development, management, and academic skills who focuses on securing Cyber Space and sharing his real-life skills as a Security Advisor, Speaker, Lecturer, and Author.  
Cyber Minds  This book comprises insights on cybersecurity across the cloud, data, artificial intelligence, blockchain, and IoT to keep you cyber safe. Shira Rubinoff's Cyber Minds brings together the top authorities in cybersecurity to discuss the emergent threats that face industries, societies, militaries, and governments today. Cyber Minds serves as a strategic briefing on cybersecurity and data safety, collecting expert insights from sector security leaders. This book will help you to arm and inform yourself of what you need to know to keep your business – or your country – safe.  This book is essential reading for business leaders, the C-Suite, board members, IT decision-makers within an organization, and anyone with a responsibility for cybersecurity.  The author, Shira Rubinoff is a recognized cybersecurity executive, cybersecurity and blockchain advisor, global keynote speaker, and influencer who has built two cybersecurity product companies and led multiple women-in-technology efforts.  Cyber Warfare – Truth, Tactics, and Strategies  Cyber Warfare – Truth, Tactics, and Strategies is as real-life and up-to-date as cyber can possibly be, with examples of actual attacks and defense techniques, tools, and strategies presented for you to learn how to think about defending your own systems and data.  This book introduces you to strategic concepts and truths to help you and your organization survive on the battleground of cyber warfare. The book not only covers cyber warfare, but also looks at the political, cultural, and geographical influences that pertain to these attack methods and helps you understand the motivation and impacts that are likely in each scenario.  This book is for any engineer, leader, or professional with either responsibility for cybersecurity within their organizations, or an interest in working in this ever-growing field.  The author, Dr. Chase Cunningham holds a Ph.D. and M.S. in computer science from Colorado Technical University and a B.S. from American Military University focused on counter-terrorism operations in cyberspace.  Incident Response in the Age of Cloud  This book is a comprehensive guide for organizations on how to prepare for cyber-attacks and control cyber threats and network security breaches in a way that decreases damage, recovery time, and costs, facilitating the adaptation of existing strategies to cloud-based environments.  It is aimed at first-time incident responders, cybersecurity enthusiasts who want to get into IR, and anyone who is responsible for maintaining business security. This book will also interest CIOs, CISOs, and members of IR, SOC, and CSIRT teams. However, IR is not just about information technology or security teams, and anyone with legal, HR, media, or other active business roles would benefit from this book.   The book assumes you have some admin experience. No prior DFIR experience is required. Some infosec knowledge will be a plus but isn’t mandatory.  The author, Dr. Erdal Ozkaya, is a technically sophisticated executive leader with a solid education and strong business acumen. Over the course of his progressive career, he has developed a keen aptitude for facilitating the integration of standard operating procedures that ensure the optimal functionality of all technical functions and systems.  Cybersecurity Threats, Malware Trends, and Strategies   This book trains you to mitigate exploits, malware, phishing, and other social engineering attacks. 
After scrutinizing numerous cybersecurity strategies, Microsoft's former Global Chief Security Advisor provides unique insights on the evolution of the threat landscape and how enterprises can address modern cybersecurity challenges.    The book will provide you with an evaluation of the various cybersecurity strategies that have ultimately failed over the past twenty years, along with one or two that have actually worked. It will help executives and security and compliance professionals understand how cloud computing is a game-changer for them.  This book is designed to benefit senior management at commercial sector and public sector organizations, including Chief Information Security Officers (CISOs) and other senior managers of cybersecurity groups, Chief Information Officers (CIOs), Chief Technology Officers (CTOs), and senior IT managers who want to explore the entire spectrum of cybersecurity, from threat hunting and security risk management to malware analysis.  The author, Tim Rains worked at Microsoft for the better part of two decades where he held a number of roles including Global Chief Security Advisor, Director of Security, Identity and Enterprise Mobility, Director of Trustworthy Computing, and was a founding technical leader of Microsoft's customer-facing Security Incident Response team.  Summary  If you aspire to become a cybersecurity expert, any good study/reference material is as important as hands-on training and practical understanding. By choosing a suitable guide, one can drastically accelerate the learning graph and carve out one’s own successful career trajectory. 
Exploring the Strategy Behavioral Design Pattern in Node.js

Expert Network
02 Jun 2021
10 min read
A design pattern is a reusable solution to a recurring problem. The term is really broad in its definition and can span multiple domains of an application. However, the term is often associated with a well-known set of object-oriented patterns that were popularized in the 90s by the book Design Patterns: Elements of Reusable Object-Oriented Software, Pearson Education, by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

This article is an excerpt from the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino – a comprehensive guide for learning proven patterns, techniques, and tricks to take full advantage of the Node.js platform. In this article, we'll look at the behavior of components in software design. We'll learn how to combine objects and how to define the way they communicate so that the behavior of the resulting structure becomes extensible, modular, reusable, and adaptable. After introducing all the behavioral design patterns, we will dive deep into the details of the Strategy pattern. Now, it's time to roll up your sleeves and get your hands dirty with some behavioral design patterns.

Types of Behavioral Design Patterns
The Strategy pattern allows us to extract the common parts of a family of closely related components into a component called the context and lets us define strategy objects that the context can use to implement specific behaviors.
The State pattern is a variation of the Strategy pattern where the strategies are used to model the behavior of a component in different states.
The Template pattern, instead, can be considered the "static" version of the Strategy pattern, where the different specific behaviors are implemented as subclasses of the template class, which models the common parts of the algorithm.
The Iterator pattern provides us with a common interface to iterate over a collection. It has now become a core pattern in Node.js. JavaScript offers native support for the pattern (with the iterator and iterable protocols). Iterators can be used as an alternative to complex async iteration patterns and even to Node.js streams.
The Middleware pattern allows us to define a modular chain of processing steps. This is a very distinctive pattern born from within the Node.js ecosystem. It can be used to preprocess and postprocess data and requests.
The Command pattern materializes the information required to execute a routine, allowing such information to be easily transferred, stored, and processed.

The Strategy Pattern
The Strategy pattern enables an object, called the context, to support variations in its logic by extracting the variable parts into separate, interchangeable objects called strategies. The context implements the common logic of a family of algorithms, while a strategy implements the mutable parts, allowing the context to adapt its behavior depending on different factors, such as an input value, a system configuration, or user preferences. Strategies are usually part of a family of solutions, and all of them implement the same interface expected by the context. The following figure shows the situation we just described:
Figure 1: General structure of the Strategy pattern
Figure 1 shows you how the context object can plug different strategies into its structure as if they were replaceable parts of a piece of machinery. Imagine a car: its tires can be considered its strategy for adapting to different road conditions.
We can fit winter tires to go on snowy roads thanks to their studs, while we can decide to fit high-performance tires for traveling mainly on motorways for a long trip. On the one hand, we don't want to change the entire car for this to be possible, and on the other, we don't want a car with eight wheels so that it can go on every possible road.

The Strategy pattern is particularly useful in all those situations where supporting variations in the behavior of a component requires complex conditional logic (lots of if...else or switch statements) or mixing different components of the same family. Imagine an object called Order that represents an online order on an e-commerce website. The object has a method called pay() that, as it says, finalizes the order and transfers the funds from the user to the online store. To support different payment systems, we have a couple of options:
Use an if...else statement in the pay() method to complete the operation based on the chosen payment option
Delegate the logic of the payment to a strategy object that implements the logic for the specific payment gateway selected by the user

In the first solution, our Order object cannot support other payment methods unless its code is modified. Also, this can become quite complex when the number of payment options grows. Instead, using the Strategy pattern enables the Order object to support a virtually unlimited number of payment methods and keeps its scope limited to only managing the details of the user, the purchased items, and the relative price, while delegating the job of completing the payment to another object. Let's now demonstrate this pattern with a simple, realistic example.

Multi-format configuration objects
Let's consider an object called Config that holds a set of configuration parameters used by an application, such as the database URL, the listening port of the server, and so on. The Config object should be able to provide a simple interface to access these parameters, but also a way to import and export the configuration using persistent storage, such as a file. We want to be able to support different formats to store the configuration, for example, JSON, INI, or YAML. By applying what we learned about the Strategy pattern, we can immediately identify the variable part of the Config object, which is the functionality that allows us to serialize and deserialize the configuration. This is going to be our strategy.

Creating a new module
Let's create a new module called config.js, and let's define the generic part of our configuration manager:

import { promises as fs } from 'fs'
import objectPath from 'object-path'

export class Config {
  constructor (formatStrategy) {                           // (1)
    this.data = {}
    this.formatStrategy = formatStrategy
  }

  get (configPath) {                                       // (2)
    return objectPath.get(this.data, configPath)
  }

  set (configPath, value) {                                // (2)
    return objectPath.set(this.data, configPath, value)
  }

  async load (filePath) {                                  // (3)
    console.log(`Deserializing from ${filePath}`)
    this.data = this.formatStrategy.deserialize(
      await fs.readFile(filePath, 'utf-8')
    )
  }

  async save (filePath) {                                  // (3)
    console.log(`Serializing to ${filePath}`)
    await fs.writeFile(filePath,
      this.formatStrategy.serialize(this.data))
  }
}

This is what's happening in the preceding code:
In the constructor, we create an instance variable called data to hold the configuration data.
Then we also store formatStrategy, which represents the component that we will use to parse and serialize the data.
We provide two methods, set() and get(), to access the configuration properties using a dotted path notation (for example, property.subProperty) by leveraging a library called object-path (nodejsdp.link/object-path).
The load() and save() methods are where we delegate, respectively, the deserialization and serialization of the data to our strategy. This is where the logic of the Config class is altered based on the formatStrategy passed as an input in the constructor.

As we can see, this very simple and neat design allows the Config object to seamlessly support different file formats when loading and saving its data. The best part is that the logic to support those various formats is not hardcoded anywhere, so the Config class can adapt without any modification to virtually any file format, given the right strategy.

Creating format strategies
To demonstrate this characteristic, let's now create a couple of format strategies in a file called strategies.js. Let's start with a strategy for parsing and serializing data using the INI file format, which is a widely used configuration format (more info about it here: nodejsdp.link/ini-format). For the task, we will use an npm package called ini (nodejsdp.link/ini):

import ini from 'ini'

export const iniStrategy = {
  deserialize: data => ini.parse(data),
  serialize: data => ini.stringify(data)
}

Nothing really complicated! Our strategy simply implements the agreed interface, so that it can be used by the Config object. Similarly, the next strategy that we are going to create allows us to support the JSON file format, widely used in JavaScript and in the web development ecosystem in general:

export const jsonStrategy = {
  deserialize: data => JSON.parse(data),
  serialize: data => JSON.stringify(data, null, '  ')
}

Now, to show you how everything comes together, let's create a file named index.js, and let's try to load and save a sample configuration using different formats:

import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

async function main () {
  const iniConfig = new Config(iniStrategy)
  await iniConfig.load('samples/conf.ini')
  iniConfig.set('book.nodejs', 'design patterns')
  await iniConfig.save('samples/conf_mod.ini')

  const jsonConfig = new Config(jsonStrategy)
  await jsonConfig.load('samples/conf.json')
  jsonConfig.set('book.nodejs', 'design patterns')
  await jsonConfig.save('samples/conf_mod.json')
}

main()

Our test module reveals the core properties of the Strategy pattern. We defined only one Config class, which implements the common parts of our configuration manager; then, by using different strategies for serializing and deserializing data, we created different Config class instances supporting different file formats. The example we've just seen shows only one of the possible alternatives that we had for selecting a strategy. Other valid approaches might have been the following:
Creating two different strategy families: one for the deserialization and the other for the serialization. This would have allowed reading from one format and saving to another.
Dynamically selecting the strategy: depending on the extension of the file provided, the Config object could have maintained a map of extension → strategy and used it to select the right algorithm for the given extension (a small sketch of this idea follows below).
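The following is a minimal sketch of that second option, not taken from the book: a hypothetical createConfig() helper that picks a strategy from a map keyed by file extension. It reuses the Config, iniStrategy, and jsonStrategy definitions shown above.

import path from 'path'
import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

// Map each supported file extension to the strategy that handles it.
const strategiesByExtension = {
  '.ini': iniStrategy,
  '.json': jsonStrategy
}

// Hypothetical factory: build a Config with the strategy inferred from the path.
export function createConfig (filePath) {
  const extension = path.extname(filePath).toLowerCase()
  const strategy = strategiesByExtension[extension]
  if (!strategy) {
    throw new Error(`Unsupported configuration format: ${extension}`)
  }
  return new Config(strategy)
}

// Usage: the caller no longer needs to know which strategy to pass.
// const config = createConfig('samples/conf.json')
// await config.load('samples/conf.json')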
As we can see, we have several options for selecting the strategy to use, and the right one depends only on your requirements and the tradeoff in terms of features and simplicity that you want to obtain. Furthermore, the implementation of the pattern itself can vary a lot as well. For example, in its simplest form, the context and the strategy can both be simple functions:

function context(strategy) {...}

Even though this may seem insignificant, it should not be underestimated in a programming language such as JavaScript, where functions are first-class citizens and used as much as fully-fledged objects. Between all these variations, though, what does not change is the idea behind the pattern; as always, the implementation can change slightly, but the core concepts that drive the pattern are always the same.

Summary
In this article, we dived deep into the details of the Strategy pattern, one of the behavioral design patterns in Node.js. Learn more in the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino.

About the Authors
Mario Casciaro is a software engineer and entrepreneur. Mario worked at IBM for a number of years, first in Rome, then in the Dublin Software Lab. He currently splits his time between Var7 Technologies – his own software company – and his role as lead engineer at D4H Technologies, where he creates software for emergency response teams.
Luciano Mammino wrote his first line of code at the age of 12 on his father's old i386. Since then he has never stopped coding. He is currently working at FabFitFun as principal software engineer, where he builds microservices to serve millions of users every day.