Mastering Golang: Concurrency and Parallelism

What Are Concurrency and Parallelism in Golang?

Concurrent Operations and Parallel Execution!

Welcome to the world of Golang! Concurrency and parallelism are key features of Go that enable efficient utilization of system resources. This guide explores the principles, patterns, and best practices for implementing concurrent and parallel operations in Go.

It’s important to distinguish between concurrency and parallelism:

Concurrency

Concurrency is a design pattern and programming approach that deals with the composition of independently executing tasks. In Go, concurrency is primarily achieved through goroutines and channels.

  • Goroutines: Goroutines are lightweight threads managed by the Go runtime. They are cheap to create, and you start one with the go keyword. Because goroutines share the same address space, passing data between them is easy, but that shared access still needs to be synchronized.

  • Channels: Channels are used for communication and synchronization between goroutines. They let goroutines send and receive values safely, so data can be exchanged without explicit locks.

Concurrency in Go helps you write programs that are more responsive and efficient by allowing tasks to execute independently without the need for explicit thread management.
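
To make these two primitives concrete before the larger example below, here is a minimal sketch (an illustration added here, not part of the original guide) that starts a goroutine with the go keyword and passes its result back over a channel:

package main

import "fmt"

func main() {
    messages := make(chan string)

    // Start a goroutine with the go keyword; it runs concurrently with main.
    go func() {
        messages <- "hello from a goroutine"
    }()

    // Receiving from the channel blocks until the goroutine sends,
    // so no extra synchronization is needed here.
    fmt.Println(<-messages)
}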

Parallelism

Parallelism, on the other hand, is the execution of multiple tasks at exactly the same time, typically on multiple CPU cores. Go achieves parallelism with the same goroutines: when a workload is parallelizable, the runtime scheduler automatically distributes runnable goroutines across the available cores, so they execute simultaneously rather than merely interleaving.
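
As a quick illustration (not part of the original guide), you can ask the runtime how many cores it sees and what its current parallelism limit is:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Number of logical CPUs available to this process.
    fmt.Println("CPU cores:", runtime.NumCPU())

    // Passing 0 to GOMAXPROCS reports the current setting without changing it.
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}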

Golang Concurrency Code Example

Here’s a simple example of concurrency in Go using goroutines and channels:

package main

import (
    "fmt"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
    for job := range jobs {
        fmt.Printf("Worker %d started job %d\n", id, job)
        time.Sleep(time.Second) // Simulate some work
        fmt.Printf("Worker %d finished job %d\n", id, job)
        results <- job * 2
    }
}

func main() {
    numJobs := 5
    jobs := make(chan int, numJobs)
    results := make(chan int, numJobs)

    // Start three worker goroutines
    for i := 1; i <= 3; i++ {
        go worker(i, jobs, results)
    }

    // Send jobs to the workers
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect results from the workers
    for k := 1; k <= numJobs; k++ {
        result := <-results
        fmt.Printf("Result: %d\n", result)
    }
}

In this example, three worker goroutines run concurrently, processing jobs sent over a channel. Note that main needs no extra synchronization: receiving exactly numJobs values from the results channel blocks until every job has been processed. This is a simple form of the worker-pool pattern in Go.
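
A common variation on this pattern, sketched below as an assumption rather than part of the original example, uses a sync.WaitGroup to close the results channel once all workers return, so the consumer can simply range over results instead of counting them:

package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done() // Signal that this worker has finished
    for job := range jobs {
        time.Sleep(100 * time.Millisecond) // Simulate some work
        fmt.Printf("Worker %d finished job %d\n", id, job)
        results <- job * 2
    }
}

func main() {
    const numJobs = 5
    jobs := make(chan int, numJobs)
    results := make(chan int, numJobs)

    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(i, jobs, results, &wg)
    }

    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs)

    // Close results once every worker has returned, so the loop
    // below can range over the channel until it is drained.
    go func() {
        wg.Wait()
        close(results)
    }()

    for result := range results {
        fmt.Printf("Result: %d\n", result)
    }
}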

Golang Parallelism Code Example

Here’s an example to illustrate parallelism in Go:

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func calculateSquare(id int, num int, wg *sync.WaitGroup) {
	defer wg.Done() // Decrement the counter when the goroutine completes

	result := num * num
	fmt.Printf("Goroutine %d: %d * %d = %d\n", id, num, num, result)
}

func main() {
	// GOMAXPROCS controls how many OS threads may execute Go code at once.
	// Since Go 1.5 it defaults to the number of CPU cores, so this call is
	// optional and shown here only for illustration.
	numCores := runtime.NumCPU()
	runtime.GOMAXPROCS(numCores)

	// Create a WaitGroup to wait for all goroutines to finish
	var wg sync.WaitGroup

	numTasks := 4

	// Launch goroutines to calculate squares in parallel
	for i := 1; i <= numTasks; i++ {
		wg.Add(1) // Increment the WaitGroup counter for each goroutine
		go calculateSquare(i, i, &wg)
	}

	// Wait for all goroutines to finish
	wg.Wait()

	fmt.Println("All goroutines have completed.")
}

In this example, we do the following:

  • We use runtime.GOMAXPROCS() to set how many CPU cores the scheduler may use. Since Go 1.5 this already defaults to the number of available cores, so the call is optional and included here mainly to make the intent explicit.

  • We create a sync.WaitGroup named wg to wait for all goroutines to finish.

  • We launch several goroutines in a loop, each of which calculates the square of a number. Before starting each goroutine, main increments the WaitGroup counter with wg.Add(1); each goroutine decrements it when it finishes via defer wg.Done().

  • After launching all goroutines, we use wg.Wait() to block the main program until all goroutines have completed. This ensures that we don’t exit the program prematurely.

By running this code, you'll see the goroutines execute in parallel when spare cores are available, each calculating a square independently; because the tasks are tiny, the output order will vary from run to run.
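
The squares above finish almost instantly, so the parallel speedup is hard to observe. For a workload where multiple cores genuinely help, one common approach (sketched here as an illustration, not taken from the original guide) is to split a CPU-bound computation, such as a large sum, across one goroutine per core:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// sumRange adds the integers in [start, end) and sends the partial sum.
func sumRange(start, end int, partial chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    sum := 0
    for i := start; i < end; i++ {
        sum += i
    }
    partial <- sum
}

func main() {
    const n = 10_000_000
    workers := runtime.NumCPU()
    chunk := n / workers

    partial := make(chan int, workers)
    var wg sync.WaitGroup

    // One goroutine per core, each summing its own slice of the range.
    for w := 0; w < workers; w++ {
        start := w * chunk
        end := start + chunk
        if w == workers-1 {
            end = n // Last worker picks up any remainder.
        }
        wg.Add(1)
        go sumRange(start, end, partial, &wg)
    }

    wg.Wait()
    close(partial)

    // Combine the partial sums.
    total := 0
    for s := range partial {
        total += s
    }
    fmt.Println("Sum of 0..n-1:", total)
}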

Conclusion

Concurrency and parallelism are powerful features of Go that enable scalable and efficient applications. Mastering these concepts empowers developers to build high-performance and concurrent software systems.

That’s All Folks!

You can find all of our Golang guides here: A Comprehensive Guide to Golang

