Vectorization is essentially the art of getting rid of explicit for loops whenever possible. With vectorization, operations are applied to whole arrays instead of individual elements.
The rule of thumb to remember is to avoid using explicit loops in your code.
Deep learning algorithms tend to shine when trained on large datasets, so it’s important that your code runs quickly; otherwise, you may end up waiting a long time for your results.
Vectorization in Python
Vectorization is a technique for implementing array operations without for loops. Instead, we use highly optimized functions from modules such as NumPy, which reduces the running time of the code. Vectorized array operations are faster than their pure-Python equivalents, with the biggest impact in numerical computation.
Python for loops are slower than their C/C++ counterparts because Python is an interpreted language. The main reasons for this slowness are Python’s dynamic typing and the lack of compiler-level optimizations, both of which incur memory and dispatch overhead. NumPy, whose arrays are implemented in C, provides vectorized operations on those arrays.
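As a minimal sketch (the array and function names here are my own), this is the kind of loop we want to eliminate, next to its vectorized equivalent:

```python
import numpy as np

def square_loop(values):
    # Pure Python: every iteration is interpreted, one element at a time
    result = []
    for v in values:
        result.append(v * v)
    return result

def square_vectorized(values):
    # Vectorized: the loop runs inside NumPy's compiled C code
    return np.asarray(values) ** 2

data = list(range(10))
print(square_loop(data))        # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print(square_vectorized(data))  # [ 0  1  4  9 16 25 36 49 64 81]
```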

Vectorized Operations using NumPy
1. Add/Subtract/Multiply/Divide by Scalar
Adding, subtracting, multiplying, or dividing an array by a scalar produces an array of the same dimensions, with the operation applied to every element. We write these operations just as we would with ordinary variables, and the code is both shorter and faster than a for-loop implementation.
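For instance, a short sketch of these scalar operations (the array values are arbitrary):

```python
import numpy as np

arr = np.array([1.0, 2.0, 3.0, 4.0])

# Each operation is applied to every element; the shape is unchanged
print(arr + 5)   # [6. 7. 8. 9.]
print(arr - 1)   # [0. 1. 2. 3.]
print(arr * 2)   # [2. 4. 6. 8.]
print(arr / 4)   # [0.25 0.5  0.75 1.  ]
```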
To measure execution time, we can use the Timer class from the timeit module, which takes the statement to execute; we then call its timeit() method, which takes the number of times to repeat the statement. Note that the measured time is not always exactly the same and depends on the hardware and other factors.
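A minimal sketch of that timing setup (the statements and repeat count are illustrative):

```python
from timeit import Timer

# The setup code runs once before the timed statement
setup = "import numpy as np; arr = np.arange(1_000_000)"

# Timer takes the statement to execute; timeit() takes the repeat count
loop_time = Timer("[x * 2 for x in arr]", setup=setup).timeit(number=10)
vec_time = Timer("arr * 2", setup=setup).timeit(number=10)

print(f"loop:       {loop_time:.4f} s")
print(f"vectorized: {vec_time:.4f} s")
```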
2. Sum and Max of array
To find the sum and the maximum element of an array, we can use a for loop, the Python built-in functions sum() and max(), or NumPy’s vectorized np.sum() and np.max().
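A short sketch comparing the three approaches (the random array is just an example input):

```python
import numpy as np

arr = np.random.rand(1_000_000)

# Explicit for loop
total, largest = 0.0, arr[0]
for x in arr:
    total += x
    if x > largest:
        largest = x

# Built-in functions, and NumPy's vectorized equivalents
print(total, sum(arr), np.sum(arr))    # same value up to float rounding
print(largest, max(arr), np.max(arr))  # same value
```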
3. Dot product
Also known as the inner product, the dot product of two vectors is an algebraic operation that takes two vectors of the same length and returns a single scalar. It is computed as the sum of the element-wise products of the two vectors. In matrix terms, given two matrices a and b of size n×1, the dot product is the matrix product of aᵀ (the transpose of a) and b.
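As a sketch, both views of the dot product can be written directly in NumPy (the vectors here are arbitrary):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Sum of element-wise products: 1*4 + 2*5 + 3*6 = 32
print(np.dot(a, b))    # 32
print((a * b).sum())   # 32, the element-wise view of the same operation

# As n x 1 matrices, the dot product is a.T @ b, giving a 1 x 1 result
a_col = a.reshape(-1, 1)
b_col = b.reshape(-1, 1)
print(a_col.T @ b_col)  # [[32]]
```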
Broadcasting
The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation.
NumPy operations are usually performed on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays have exactly the same shape; when they don’t, NumPy compares their shapes dimension by dimension, and two dimensions are compatible when they are equal or when one of them is 1.
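A minimal sketch of both cases (the shapes are chosen for illustration):

```python
import numpy as np

# Same shape: plain element-wise addition
a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
print(a + b)    # [11. 22. 33.]

# Scalar broadcast: the scalar is conceptually stretched across the array
print(a * 2.0)  # [2. 4. 6.]

# Shape (3, 3) with shape (3,): a is added to every row of m
m = np.ones((3, 3))
print(m + a)    # each row becomes [2. 3. 4.]
```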
Why is Vectorization important?
•A lot of scalable deep learning implementations run on a GPU (graphics processing unit).
•Both GPUs and CPUs have parallelization instructions, sometimes called SIMD instructions, which stands for single instruction, multiple data.
•In practice, this means that if you use built-in NumPy functions such as np.dot, or other functions that don’t require an explicit for loop, the hardware can apply one instruction to many data elements at once, making the computation much faster.
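A rough sketch of the resulting speedup (absolute timings will vary with hardware, as noted earlier):

```python
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Explicit for loop over the elements
start = time.time()
total = 0.0
for i in range(len(a)):
    total += a[i] * b[i]
loop_seconds = time.time() - start

# Built-in vectorized function; free to use SIMD/parallel instructions
start = time.time()
total_vec = np.dot(a, b)
vec_seconds = time.time() - start

print(f"for loop: {loop_seconds:.4f} s, np.dot: {vec_seconds:.6f} s")
```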
Conclusion
Vectorization and broadcasting are ways to reduce computation time and optimize memory usage when doing mathematical operations with NumPy. These techniques are crucial for keeping running time low so that algorithms don’t hit performance bottlenecks, and that optimization is what makes applications scalable.