Top 12 VS Code Extensions

Visual Studio Code extensions make your workflow in VS Code more efficient and enjoyable. This article lists some of my favorite extensions, in no particular order.

1. Auto Rename Tag

If you change an HTML/XML tag, this extension will automatically update the paired tag.

2. Bracket Pair Colorizer

This extension’s intuitive colorizing of matching brackets (and parentheses) makes it easier to see where a block starts and ends.

3. Live Server

This extension launches a local server with live reload. This extension is great for updating static HTML/CSS/JS files quickly without creating your own server.

4. Prettier

Have you been formatting your code manually? No more. This extension will format your code for you on save.
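To have Prettier format on save, you can enable it in your VS Code settings. A sketch of the relevant `settings.json` entries (`esbenp.prettier-vscode` is the extension's marketplace identifier):

```json
{
  // format files every time you save
  "editor.formatOnSave": true,
  // use Prettier as the default formatter
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```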

5. Live Share

Do you want to work on a project with a friend or teammate? This extension allows you to collaborate on a project in VS Code in real-time. Think Google Docs but for code!

6. Quokka.js

This extension allows you to test out JavaScript and TypeScript code on the fly in an in-editor playground.

7. Import Cost

When you import a package into a file in VS Code, this extension will display, inline, how much that import adds to your bundle size.

8. GitLens

This extension lets you visualize code authorship via Git blame annotations and CodeLens. You can also navigate back and forth through a file’s history to see the changes made to it.

9. Path Intellisense

This extension will auto-complete filenames and file paths for you as you type them.

10. Snippets

This is not a single extension but a collection of extensions that provide code snippets for specific languages and frameworks. Snippets save you from typing the same boilerplate over and over. The example I’m showing here is the ES7 React snippets extension for React developers.
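For example, with the ES7 React snippets extension, typing a short prefix expands into boilerplate. As a sketch (the exact prefix and output depend on the extension version), the `rafce` prefix expands to an arrow-function component:

```jsx
import React from 'react'

const App = () => {
  return (
    <div>App</div>
  )
}

export default App
```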

11. Better Comments

This extension helps you write more human-friendly comments in your code. You can categorize your annotations with different colors.

12. VS Code Icons

This extension adds icons to the files in your sidebar so it is easier to tell them apart when there are a lot of files.

Go check out all of these great extensions!

Flutter Error: The argument type ‘String’ can’t be assigned to the parameter type ‘Uri’

The Error

If you are using a string URI when dealing with the http package in Flutter, you may be seeing this error:

The argument type 'String' can't be assigned to the parameter type 'Uri' at .... (argument_type_not_assignable)

This error is due to an update in the package.

The Solution

Parse the String to be an explicit URI by using the Uri.parse() method:

http.get(yourString) becomes http.get(Uri.parse(yourString))

Here it is in an example:

String dataURL = "";
http.Response response = await http.get(Uri.parse(dataURL));

To improve compile-time type safety, the http package (version 0.13.0) introduced changes that made all functions that previously accepted Uris or Strings now accept only Uris instead.

You will need to explicitly use Uri.parse to convert Strings to Uris. In the previous version, the http package called that for you behind the scenes.

How to Use MediaStreams in React

Do you need to access the user’s webcam for video chat or the user’s microphone for a recording? In this simple tutorial, I’ll show you how to access and use the computer’s media inputs with React.

The MediaDevices interface provides access to connected media input devices like cameras and microphones.

Get access to user’s media input

After getting the user’s permission, the MediaDevices.getUserMedia() method produces a MediaStream. This stream can have multiple tracks containing the requested types of media. Examples of tracks include video and audio tracks.

The getUserMedia() method takes in a constraints parameter with two members, video and audio, describing any configuration settings for the tracks. It returns a Promise that resolves to a MediaStream object. You can set your video element’s srcObject to the stream.

// get the user's media stream
const startStream = async () => {
    const newStream = await navigator.mediaDevices.getUserMedia({
        video: true,
        audio: true,
    });
    webcamVideo.current.srcObject = newStream;
};

Here are some examples of preferences that you can customize in the stream:

// Requests default video and audio
{ audio: true, video: true }

// Requests video with a preference for 1280x720 camera resolution. No audio
{
  audio: false,
  video: { width: 1280, height: 720 }
}

// Requires minimum resolution of 1280x720
{
  audio: true,
  video: {
    width: { min: 1280 },
    height: { min: 720 }
  }
}

// Browser will try to get as close to ideal as possible
{
  audio: true,
  video: {
    width: { min: 1024, ideal: 1280, max: 1920 },
    height: { min: 576, ideal: 720, max: 1080 }
  }
}

// On mobile, browser will prefer front camera
{ audio: true, video: { facingMode: "user" } }

// On mobile, browser will prefer rear camera
{ audio: true, video: { facingMode: { exact: "environment" } } }

Save user’s media stream in a variable

After you get the user’s media stream from .getUserMedia(), save it in a state variable so that you can manipulate the stream later (to stop it, get a track from it, etc.).

For example, if you want to stop the stream, get all of the stream’s tracks using the MediaStream.getTracks() method and call the .stop() method on them.

If you want to access the audio separately, use the MediaStream.getAudioTracks() method. To access video separately, use MediaStream.getVideoTracks().
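As a sketch, these small helpers show how those track methods fit together (the function names `stopAllTracks` and `setAudioEnabled` are my own, not part of the MediaStream API):

```javascript
// Stop every track so the camera/microphone is fully released
const stopAllTracks = (stream) => {
  stream.getTracks().forEach((track) => track.stop());
};

// Mute or unmute only the audio tracks, leaving video running
const setAudioEnabled = (stream, enabled) => {
  stream.getAudioTracks().forEach((track) => {
    track.enabled = enabled;
  });
};
```

Setting `track.enabled = false` mutes a track without releasing the device, while `track.stop()` releases it entirely (the webcam light turns off).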

You should also have state that controls whether media input is on or off, and a useRef hook to reference the video element in the DOM.

This is the final code:

import React, { useState, useRef } from 'react';

const App = () => {
    // controls if media input is on or off
    const [playing, setPlaying] = useState(false);

    // controls the current stream value
    const [stream, setStream] = useState(null);

    // controls if audio/video is on or off (separately from each other)
    const [audio, setAudio] = useState(true);
    const [video, setVideo] = useState(true);

    // controls the video DOM element
    const webcamVideo = useRef();

    // get the user's media stream
    const startStream = async () => {
        const newStream = await navigator.mediaDevices.getUserMedia({
            video: true,
            audio: true,
        });
        webcamVideo.current.srcObject = newStream;
        setStream(newStream);
        setPlaying(true);
    };

    // stops the user's media stream
    const stopStream = () => {
        stream.getTracks().forEach((track) => track.stop());
        setPlaying(false);
    };

    // enable/disable audio tracks in the media stream
    const toggleAudio = () => {
        stream.getAudioTracks()[0].enabled = !audio;
        setAudio(!audio);
    };

    // enable/disable video tracks in the media stream
    const toggleVideo = () => {
        stream.getVideoTracks()[0].enabled = !video;
        setVideo(!video);
    };

    return (
        <div className="container">
            <video ref={webcamVideo} autoPlay playsInline></video>
            <button onClick={playing ? stopStream : startStream}>
                {playing ? 'Stop webcam' : 'Start webcam'}
            </button>
            <button onClick={toggleAudio}>Toggle Sound</button>
            <button onClick={toggleVideo}>Toggle Video</button>
        </div>
    );
};

export default App;

How to Embed Google Form in Website

This is how you embed a Google Form in an HTML page so that users can interact with the form directly on the page instead of following a link to the form in a separate window.

1. In the Google Form page, press the “Send” button at the top right.

2. On the “Send form” popup, navigate to the Embed HTML tab.

3. Update the width and height of the form to fit the window you are going to put it in.

4. Copy the iframe code.

5. Paste it into the code for the HTML page you are putting the form in.


<h1>This is a form</h1>

<iframe src="" width="640" height="1395" frameborder="0" marginheight="0" marginwidth="0">Loading…</iframe>