Electron microscopy
 
Python Automation and Machine Learning for ICs: Chapter M
- Python Automation and Machine Learning for ICs -
- An Online Book: Python Automation and Machine Learning for ICs by Yougui Liao -
Python Automation and Machine Learning for ICs                                                         http://www.globalsino.com/ICs/        


Table of Contents/Index 
Chapter/Index: Introduction | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | Appendix

=================================================================================

 
   
 
                                                       
K-means clustering and PCA for failure analysis Introduction
ML model complexity versus dataset size Introduction
Feature selection: removing multicollinearity Introduction
Memory resources in Apache Spark applications Introduction
Clusters (Kubernetes, Apache Mesos, Spark Standalone, Apache Hadoop YARN) in Apache Spark Introduction
Leveraging precision, speed, and automation: Integrating Mask R-CNN and YOLOv8 Introduction
Mask R-CNN (Mask Region-based Convolutional Neural Network)  Introduction
Evaluating a ML model with BigQuery ML Introduction
Building an effective machine learning team Introduction
Managing machine learning (ML) projects Introduction
Personalizing applications with ML Introduction
Identifying the business value of using ML Introduction
Managing ML projects with Google Cloud Introduction
Platform Security Engineering (PSE) and ML Introduction
Comparisons among SparkML, MLlib, and AutoML Introduction
Hadoop MapReduce used by Google, Netflix, Amazon and ML Introduction
Hadoop MapReduce Introduction
Comparison between Apache Spark's MLlib and Python Introduction
MLlib (Machine Learning Library) Introduction
Covariance versus Covariance Matrix Introduction
Principal Component Analysis (PCA) versus Uniform Manifold Approximation and Projection (UMAP) Introduction
Uniform Manifold Approximation and Projection (UMAP) Introduction
Machine learning versus data science Introduction
Martin Zinkevich's "Rule of Machine Learning": dataset quality Introduction
Performance metrics Introduction
BigQuery ML Introduction
tf.keras.datasets (e.g. MNIST, CIFAR-10, CIFAR-100, Fashion MNIST) Introduction
Max-pooling Introduction
Default mutable argument Introduction
Mistakes that beginner machine learning (ML) students often make Introduction
Labor cost of data analysis with and without automation and ML techniques Introduction
Trade-off between minimizing loss and minimizing complexity Introduction
L1 Loss (Absolute Loss or Mean Absolute Error (MAE)) Introduction
Correlations/similarity/dissimilarity/pair/match of two columns in csv data Introduction
Virtual reality (VR), augmented reality (AR), and mixed reality (MR) Introduction
Maintaining arc-consistency Introduction
Precision, Recall, False Positive Rate, and False Negative Rate (Miss Rate or False Negative Proportion) Introduction
Sensor Model Introduction
Hidden Markov Model (HMM) Introduction
Markov chain Introduction
Markov assumption Introduction
Sampling Methods for Approximate Inference Introduction
Model Checking Algorithms and Modus Ponens Algorithms Introduction
Modus ponens (a logical inference rule) Introduction
Model checking Introduction
Manhattan distance Introduction
Save dynamic graph as a movie/video or split a movie to image frames Introduction
Transition model Introduction
POMDP (Partially Observable Markov Decision Process) Introduction
Optimal value function in Markov Decision Process (MDP) Introduction
Stationary and Non-Stationary State Transitions in Markov Decision Process (MDP) Introduction
Finite-horizon MDP (Markov Decision Process) Introduction
State-action rewards in Markov Decision Process (MDP) Introduction
Regularization techniques for decision trees Introduction
Blackbox optimization algorithms Introduction
Comparison among Grid Search, Bayesian Optimization, Random Search and Manual Search Introduction
Softmax regression (multinomial logistic regression)/softmax multi-class network/softmax classifier Introduction
Perceptron algorithm Introduction
Bandwidth parameter (τ) in LWR and KDE Introduction
Hidden Markov Models (HMMs) Introduction
Color in Table obtained by matplotlib.pyplot Introduction
Mean squared error (MSE) (L2 loss function, Euclidean loss) and root mean squared error (RMSE) Introduction
Mirror/reflect image from left to right/from top to bottom Introduction
AutoML Introduction
Hide/turn on/off axes/axis on matplotlib Introduction
Examples of matplotlib (image/data) visualizations Introduction
Median blurring and cv2.medianBlur() Introduction
Extract the least/most frequency/duplicate/occurrence element in a list Introduction
Recommender systems based on machine learning Introduction
k-means algorithm Introduction
Locate/find the center/coordinates of a bright (maximum/highest intensity) spot in an image Introduction
model_fn Introduction
Microsoft Teams Introduction
Long short-term memory (LSTM) Introduction
Check whether or not a cell value in a column of a CSV file matches a value in a column of another CSV file, then do something: e.g. add a value to another column of a csv file Introduction
Create a function called main() to contain the code you want to run code
Call other functions from main() code
Match on images to find and to highlight unsimilar (threshold=0) to identical (threshold=1) regions of an image that match a template with a cross-correlation method code
Call and then run your own functions and modules in different/other Python files Introduction
Model Subclassing to create a Keras model with TensorFlow Introduction
.norm() (Taxicab Norm, Manhattan Norm, Euclidian Norm and Vector Max Norm) Introduction
Enlarge a window to maximum in size Introduction
Activation functions in machine learning Introduction
Launch file menu Introduction
Launch help menu Introduction
Launch Replace menu from Find menu Introduction
Open task manager window Introduction
Move the active window to make space for other apps Introduction
Work (read, write, and merge and unmerge cells) in Excel sheets Introduction
Work (read, write, insert and delete rows and columns, and merge and unmerge cells, shift/move cell values) in Excel sheets Introduction
Minimize/maximize/restore/activate/resize/move/close Window objects Introduction
Get the name of the current/most front window Introduction
Bring/activate an application/window to most front/foreground Introduction
Bind/link multiple commands to buttons Introduction
Bind Python functions and methods to events (similar to if loops) Introduction
(Single and multiple) selection between choices or options Introduction
Copy and then store it into memory and it can be pasted for use later Introduction
Get the latest/newest/most recent file in a folder Introduction
tf.keras.model.save() Introduction
Accuracy in machine learning process Introduction
Speed in machine learning process Introduction
Thresholding with Match Template: Match on images to find and to highlight unsimilar (threshold=0) to identical (threshold=1) regions of an image that match a template with a cross-correlation method Introduction
Modify/replace the line in a text file if a line contains specific string Introduction
   
   
   
Machine learning and its techniques Introduction
Machine learning algorithms Introduction
Core Steps/Procedure/Designing of Machine Learning Introduction
Concepts  
  Symbols/notations Introduction
  Input space Introduction
    X "Label space" (X, y) Introduction
  Loss (risk, cost, objective) function Introduction
  Predicted label Introduction
  True label (observed label) Introduction
  Predicted label versus predictor (feature) Introduction
  Various names or terms that describe similar concepts or techniques in ML Introduction
  Excess risk Introduction
  Cross entropy Introduction
  Empirical Risk Minimization (ERM) Introduction
  Predicted values (ŷ) Introduction
  Empirical loss versus population loss Introduction
  Uniform convergence Introduction
  Optimizer Introduction
  Metrics to monitor during training and testing Introduction
  Probabilistic model Introduction
  Covariance matrix Introduction
  Linear Discriminant Analysis Introduction
  Nonasymptotic versus asymptotic analysis Introduction
    X Asymptotic analysis Introduction
    X Nonasymptotic Analysis Introduction
  Epochs and sample size Introduction
    X Epoch Introduction
  Bound Introduction
    X Probability bounds analysis (PBA) Introduction
  Sample Size versus Bounds Introduction
  Sample Mean Introduction
  True Mean Introduction
  Deviation Threshold Introduction
  Actual Probability of Deviation Introduction
  Deviation Probability (Hoeffding Bound) Introduction
  Validation Introduction
  Brute force discretization Introduction
  Lipschitzness/Lipschitz continuity Introduction
  Generalization error Introduction
  Generalization Error/Generalization Loss/Test Error Introduction
  Discretization error Introduction
  Big O notation Introduction
  Threading Introduction
  Binary trees Introduction
  Eigenvectors/eigenvalues Introduction
  Convex optimization, convex functions and convex sets Introduction
  Cocktail party problem Introduction
  Linear Regression Introduction
    X Multiple linear regression Introduction
  Learning Algorithm (estimator) Introduction
  Input data (sample and feature) (multiple sample/example) Introduction
    X Feature analysis/feature importance analysis/weight of feature Introduction
    X Feature importance for Multinomial Naive Bayes algorithm Introduction
    X Feature extractions from wafers Introduction
      Feature extraction using radon transform Introduction
    X Categorical features preprocessing layers Introduction
    X Training Example (x, y) Introduction
    X feature_extraction.text (code). (code). (code)
    X Feature ingestion Introduction
    X Vertex AI Feature Store Introduction
      tf.feature_column.bucketized_column Introduction
      tf.feature_column.categorical_column_with_identity Introduction
    X Feature and feature vector/Featurization Introduction
    X Feature selection Introduction
      Forward Search Introduction
    X Outlier of feature (code)
  Parameterized family and model parameters Introduction
  Output (target variable, y, Y) Introduction
  Learning rate Introduction
  Iterative algorithms Introduction
    X Gradient descent algorithm (for updating θ) Introduction
    X Batch gradient descent Introduction
    X Stochastic gradient descent (SGD) Introduction
  Algorithms for directly finding the global optimum Introduction
    X Direct optimization Introduction
    X Global optimization and global minimum Introduction
  Trace of a square matrix Introduction
  Transpose of vector and matrix Introduction
  Parametric learning algorithm Introduction
  Non-parametric learning algorithm Introduction
  Kernel density estimation (KDE) Introduction
  Underfitting Introduction
  Convolutional neural networks (CNNs) Introduction
    X Convolutional layers Introduction
  Comparison between mean squared error (MSE), absolute error (L1 Loss) and fourth-power loss Introduction
  Comparison between L1 Regularization and L1 Loss (absolute loss or mean absolute error (MAE)) Introduction
  Likelihood and maximum likelihood estimation (MLE) Introduction
  Linear regression versus classification Introduction
  Logistic regression Introduction
  Logistic regression versus linear regression Introduction
  Newton's method Introduction
  Newton's method versus gradient descent Introduction
  Perceptron algorithm and logistic regression Introduction
  Probability density function (PDF): comparisons between (normal (gaussian) distribution, uniform distribution, exponential distribution and poisson distribution) Introduction
  Parameters, features and examples Introduction
  Gaussian distribution and standard gaussian distribution (multivariate normal distribution) Introduction
  Exponential Family: Parameter, Sufficient Statistic, Natural Parameter, Base Measure and Log-Partition Function (Bernoulli distribution and Gaussian distribution) Introduction
  Negative log likelihood (NLL) Introduction
  GLM (Generalized Linear Model) Introduction
  Bayesian Probability, Bayesian Statistics (Distribution Over a Distribution), versus Bayesian Inference Introduction
  Learning rule Introduction
  StatsModels Introduction
  Canonical response function/canonical link function Introduction
  Parameterizations Introduction
  Hyperplane/decision boundary Introduction
  Discriminative algorithms/discriminative models Introduction
  Artificial Neural Networks (ANNs) Introduction
  Generative learning models Introduction
  Gaussian Discriminant Analysis (GDA) Introduction
  Discriminative algorithms versus generative models Introduction
  Bernoulli distribution Introduction
  Training set Introduction
  Joint likelihood Introduction
  Single parameter estimation versus multiple parameter estimation Introduction
  History/hot topics of ML Introduction
  Logistic regression versus Gaussian discriminant analysis Introduction
  Quantum machine learning Introduction
  Comparisons among artificial intelligence (AI), machine learning (ML) and quantum machine learning (QML) Introduction
  Comparison between Poisson distribution, Gaussian (normal) distribution and logistic regression Introduction
  Pipelines in ML Introduction
  Categorical distribution Introduction
  Posterior probability and prior probability Introduction
  Indicator function Introduction
  Conferences on machine learning Introduction
  Laplace smoothing/Laplace correction/add-one smoothing Introduction
  (Single) Naive Bayes/Gaussian Naive Bayes Introduction
  Single Naive Bayes (Gaussian Naive Bayes) versus Multinomial Naive Bayes Introduction
  Feature vector and number of features Introduction
  Analysis of papers/publications/literature in machine learning and Python applications Introduction
  Fully Connected Layers (FC) in Deep Learning Introduction
  Convolutional Layers (CONV) in Deep Learning Introduction
  Hidden layer in deep learning neural network Introduction
  Energy consumption in computation of machine learning Introduction
  DRAM applications and challenges in machine learning Introduction
  Optimization of energy efficiency in machine learning systems Introduction
  Custom AI/ML chips/ICs Introduction
  Multivariate Bernoulli learning model Introduction
  Multinomial Event Model Introduction
  Optimal margin classifier/maximum margin separator Introduction
  Functional margin Introduction
  Geometric margin Introduction
  Support Vector Machines (SVM) and Logistic Regression Introduction
  Comparison among classifier, hyperplane and decision boundary Introduction
  Geometric Margin versus Functional Margin Introduction
  Representer theorem and its derivation Introduction
  L2 regularization/Ridge/ridge regularization/Tikhonov regularization Introduction
  Mathematical equations, formulas and inequalities used in machine learning Introduction
  Kernel tricks and kernel function Introduction
  Soft margin versus hard margin Introduction
  Cross-validation Introduction
  Logistic regression and Naive Bayes Introduction
  "Norm" of parameters, and L1 Norm (Manhattan Norm) and L2 Norm (Euclidean Norm) Introduction
  Frequentist approach versus Bayesian approach Introduction
  Maximum A Posteriori (MAP) Introduction
  Mean Average Precision (MAP) Introduction
  Minimum A Priori (MAP) Introduction
  Training error versus model complexity Introduction
  Training score/training error Introduction
  Choice of parameters for training models Introduction
  Splitting a training dataset into different subsets Introduction
  Polynomial models Introduction
  K-Fold Cross-Validation Introduction
  Leave-One-Out Cross-Validation (LOOCV) Introduction
  Standard hold-out validation Introduction
  Updating Hypothesis (ĥ) and/or Parameter θ^ Introduction
  Deterministic function Introduction
  Statistical efficiency Introduction
  Hyperparameter tuning (model tuning) Introduction
  Validation error Introduction
  Bayes error/Bayes risk/Bayes rate/irreducible error Introduction
  True Function Introduction
  Linear model versus polynomial model Introduction
  Error excess Introduction
  Generalization risk/generalization error versus empirical risk Introduction
  Misclassification loss in decision trees Introduction
  Gini Loss Introduction
  Weight space Introduction
  Batch sizes Introduction
  Data parallelism in distributed training Introduction
  Pipeline Introduction
  Comparisons among Manual Search, Vertex Vizier, AutoML and Early stopping on google cloud Introduction
  Bayesian optimization Introduction
  Additive structure/additive model Introduction
  Ensemble of decision trees Introduction
  Ensembling Introduction
  Decorrelating models Introduction
  Boosting Introduction
  Boosting versus Bagging Introduction
  AdaBoost (Adaptive Boosting) Model Introduction
  Neuron = linear + activation Introduction
  Model = architecture + parameters Introduction
  Batch Gradient Descent (BGD), Stochastic Gradient Descent (SGD), Mini-Batch Gradient Descent, Batch Stochastic Gradient Descent, Momentum, (Adagrad, Adadelta, RMSprop), and Adam (Adaptive Moment Estimation) Introduction
  Vectorization Introduction
  Momentum algorithm Introduction
  Example of ML debugging: Anti-Spam Introduction
  Experiences of developing machine learning algorithms Introduction
  Time of training a ML algorithm Introduction
  Weighted accuracy in ML Introduction
  Learning Algorithm and Pipeline Introduction
  Mixture of Gaussians (MoG) Introduction
  Expectation-Maximization (EM) algorithm working in Gaussian Mixture Models (GMMs) Introduction
  Expectation-Maximization (EM) algorithm Introduction
  Impact of ML on ICs (integrated circuits) Introduction
  Mixture of Gaussians (MoG) versus Factor Analysis (FA) Introduction
  Maximum Likelihood Estimation (MLE) of single Gaussian (normal) distribution Introduction
  Latent features and latent variables Introduction
  Markov Decision Process (MDP) Introduction
  Robotics and machine learning Introduction
  Open datasets, and open-source tools and libraries for ML practice Introduction
  Intrinsic motivation in ML Introduction
  Model-Free RL and Model-based RL (reinforcement learning) Introduction
   
Difference between estimation and approximation errors Introduction
  Approximation error Introduction
  Estimation error Introduction
   
Distribution  
  True Distribution Introduction
  Population Distribution Introduction
  Sample Distribution Introduction
  Distribution of θ (parameter distribution) Introduction
  Posterior distribution Introduction
   
Learning theory Introduction
  Generalization Introduction
  Bias and variance, and bias-variance trade-off in ML Introduction
  Model Complexity Introduction
  Convergence and Optimization Introduction
  Sample Complexity Introduction
  Probably Approximately Correct (PAC) learning Introduction
  Margin Theory Introduction
  No Free Lunch Theorems Introduction
  Practice ML projects for beginners Introduction
   
Hypothesis (predicted output (h(x))) Introduction
Finite Hypothesis Class versus Infinite Hypothesis Class Introduction
  Finite Hypothesis Class/finite Hypothesis Analysis Introduction
  Infinite Hypothesis Class Introduction
Hypothesis space/model space/search space Introduction
"Model" versus "hypothesis" Introduction
Hypothesis class/hypothesis family/predictor class/model class/hypothesis family/predictor family/model family (h) Introduction
   
Training process in ML (with "best"-option table) Introduction
Typical training setup in AI and comparisons of different training libraries Introduction
Text/keyword classification/sort/prediction, training/test e.g. Youtube spam Introduction
Empirical loss/training loss Introduction
Train/Test versus Model Accuracy Introduction
     
ML workflow Introduction
  Training  
    X Dataset and data preparation Introduction
      Load raw data (number, category, text, image, video, etc) Introduction
        # 3 ways to create a Keras model with TensorFlow  
          Sequential API to create a Keras model with TensorFlow Introduction
          Functional API to create a Keras model with TensorFlow Introduction
          Model Subclassing to create a Keras model with TensorFlow Introduction
        # Labeling in supervised machine learning Introduction
    X Build input pipeline Introduction
        # tf.data.Dataset Introduction
          tf.data.TextLineDataset() (Code)
          tf.data.TFRecordDataset() (Code)
          tf.data.Dataset.from_tensor_slices (Code)
          tf.data.FixedLengthRecordDataset (Code)
      Data ingestion Introduction
        # Data and information visualization Introduction
        # Data processing  
        # Data storage  
        # Data security  
    X    
      Data cleaning  
        # Analysis Introduction
        # Remove irrelevant observations  
        # Cleaning missing/incomplete data Introduction
        # Identify outliers  
        # Fix structural errors  
        # Data validation  
      Preprocessing: Keras preprocessing layers Introduction
        # Normalization  
          tf.keras.layers.normalization  
        # Transformation Introduction
        # Validation Introduction
        # Feature vector (extract features/featurization) Introduction
          Features preprocessing (e.g. Keras for mapping from columns in the CSV to features) Chapter T
    X Machine learning Introduction
      Build and train model Introduction
        # Train model Introduction
          train_and_evaluate Introduction
        # Tracking Introduction
        # Model analysis and validation/evaluating  
          train_and_evaluate Introduction
          Train-dev-test split (training-validation-testing split): Ratio for splitting dataset into training, validation and test sets Introduction
  (II) Deploying and predicting Introduction
    X Inputs  
    X Trained model and automatic model selection Introduction
    X Compile model Introduction
    X Predictive model (use model) Introduction
      Comparison of regression classes Introduction
        # .fit()/.predict()  
    X Batch scoring and model feedback to data preparation Introduction
Natural Language Processing (NLP) Introduction
  Keyword extraction methods from documents in Natural Language Processing (NLP) Introduction
    X Rake_NLTK Introduction
    X Word Cloud Introduction
    X YAKE (Yet Another Keyword Extractor) Introduction
    X Spacy Introduction
    X Textrank Introduction
    X Linear Support Vector Classifier (Linear SVC) Introduction
   
Classification Introduction
  Binary classifiers Introduction
    X Text classification/sort/prediction, train/test e.g. Youtube spam Introduction
  Multi-class classifiers Introduction
   
Supervised learning Introduction
  Decision tree learning Introduction
  Types of predictions with Supervised Learning Introduction
Unsupervised learning Introduction
  Clustering Introduction
     
Reinforcement learning Introduction
Non-linearity in machine learning Introduction
Machine learning for few things Introduction
Machine learning example step-by-step (prediction of house price) Introduction
Machine learning example step-by-step (wafer fail analysis) Introduction
   
Machine learning in yield analysis in semiconductor manufacturing Introduction
Feature extraction using radon transform Introduction
Wafer map similarity ranking (WMSR) Introduction
Main reasons of a surge in ML usage across all industries recently but not earlier Introduction
Defect Detection and Classification Introduction
Mouse clicks (code)
Scroll mouse Introduction
Drag mouse Introduction
Move mouse Introduction
Mouse right-click Introduction
Mouse left-click Introduction
Double click of mouse Introduction
Left click a specific position Introduction
Right click a specific position Introduction
Double click a specific position Introduction
displayMousePosition() (code)
Get mouse position/coordinates on click Introduction
Take a screenshot using a mouse click and drag method Introduction
Get pixel location/coordinates on an image using mouse click/events Introduction
Turn on and off with mouse press or a process Introduction
from pynput.mouse import Button (code)
from pynput.mouse import Controller (code)
Positions and colors of mouse/cursor and features Introduction
Move the cursor/mouse to the found, similar spots one-by-one Introduction
Move the mouse/cursor to the left or right Introduction
.mouseUp(): Move the mouse to the given position and then release the mouse button. .mouseUp(x=moveToX, y=moveToY, button='left'). The .click() function is just a convenient wrapper around the .mouseDown() and .mouseUp() function calls. (code)
.mouseDown(): Move the mouse to the given position and then press the mouse button down. .mouseDown(x=moveToX, y=moveToY, button='left'). The .click() function is just a convenient wrapper around the .mouseDown() and .mouseUp() function calls. (code)
.middleClick(): .middleClick(x=moveToX, y=moveToY) (Code)
dragTo(): dragTo(x, y, duration=num_seconds) drags mouse to XY. (code)
dragRel(): dragRel(xOffset, yOffset, duration=num_seconds) drags mouse relative to its current position. Three arguments: how many pixels to move horizontally to the right, how many pixels to move vertically downward, and (optionally) how long it should take to complete the movement. (code)
moveRel(): .moveRel(xOffset, yOffset, duration=num_seconds). Moves the mouse cursor relative to its current position. (Code)
.moveTo(x, y, t): x and y: coordinates, and t: time (duration=num_seconds). An optional duration integer or float keyword argument specifies the number of seconds it should take to move the mouse to the destination. By default, pyautogui.MINIMUM_DURATION is 0.1. (Code)
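A minimal pyautogui sketch (the coordinates are arbitrary examples) illustrating the moveTo/moveRel/dragTo/mouseDown/mouseUp calls described above:

import pyautogui

# Current mouse position as an (x, y) point
print(pyautogui.position())

# Move to absolute screen coordinates (200, 300) over 0.5 seconds
pyautogui.moveTo(200, 300, duration=0.5)

# Move 100 px right and 50 px down relative to the current position
pyautogui.moveRel(100, 50, duration=0.25)

# Press the left button, drag to (400, 400) over 1 second, then release
pyautogui.dragTo(400, 400, duration=1, button='left')

# Explicit press/release pair; .click() simply wraps these two calls
pyautogui.mouseDown(button='left')
pyautogui.mouseUp(button='left')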
   
   
Copy and then store it into memory and it can be pasted for use later (multiple clipboard) Introduction
matplotlib.pyplot to plot/generate images (with axis/colored text or annotation) Introduction
(Single and multiple enter/input) box for pop-up window Introduction
Image matching with cross correlation and overlap of template edge. In this matching process, normalized cross-correlation with those edge images is performed. code.
Cross correlation between two images of any sizes. Multiscaling is used to avoid the issue caused by the different sizes of the template and the original image, so that a match can still be found even when the template is larger than the original image. code
Pop-up windows/messages tkinter, ctypes, easygui
Matrix conversion to image image
Option/selection/choice methods ("pop-up windows of Yes and No ") Introduction
Merge/combine two or more text files (add a new line to the beginning of a text file) Introduction
File name, folder name. {}{}....format. Manipulation of file and folder names (rename file name and folder name): i) Create a new folder and then copy all files from a folder to the new folder and rename the file, and then open the file. If the folder exists, then no file will be copied, but the file will still be opened. ii) Print and export the folder names and file names (with or without extensions) from a folder into a text file. iii) csv2image filename. Introduction
Mixing of using numbers and strings by conversions Introduction
Build databases with different/uncertain number of members Introduction
Numpy: Access the element at the second row, the third entry, access a specific row or a column, access some elements (submatrix), or replace/modify an element in the array, print a transfer of an array, access array under conditions or filtering Introduction
Markers (e.g. color cross, scatter, and circles) at specific coordinates with x- and y-axis Matplotlib
Draw lines manually and then label them with arrows code
Find minimum and maximum values in a list Introduction
Prevent other applications from modifying the content until another Python script runs code.
Set the output image to zero everywhere except my mask (color filter), and display red, green, and blue (RGB) channels of an image. code, code.
Count the number of lines (rows) and columns in a txt (and a csv) file, count different numbers in each region in a column, count missing or not available values. code. Introduction
Split columns and merge in csv: Split columns and then merge the splits in a csv file. Introduction
Subtract (minus) two images after resizing them code, code.
Create images with global, adaptive mean, adaptive Gaussian, binary, trunc, Tozero, and tozero thresholds. code
Load/launch/open images and ColorMixing in DigitalMicrograph Introduction
Get the list of the methods for a function Introduction
Modulo operator Introduction
Methods to open google chrome (problems: Google chrome closes immediately after being launched with selenium) Introduction
Move/replace file(s) from one directory to another Introduction
Mean (average, .mean())/.sum()/maximum(.max())/minimum(.min())/number of non-null values(.count())/.median()/variance(.var())/standard deviation(.std()/pstdev()) Introduction
median() in csv Introduction
Get maximum and minimum value of column and its index Introduction
Add markers on a map Introduction
Find latitude and longitude of a place in a map Introduction
Monitor specific new files and execute the file Introduction
Watchdog for monitoring specific file or files with specific extension, and then run another file from watchdog Introduction
Transparency of marker (e.g. for plots) Introduction
Watchdog for monitoring specific file or files with specific extension Introduction
Monitor multiple changed of folder and files Introduction
Monitor the current folder Introduction
Move/copy all files from original folder in a directory to a new directory Introduction
Subtract/minus one image from another image Introduction
Top (ranking, best, must know) Python libraries/modules Introduction
Resize and then sum/mix/overlap two images Introduction
Measure length/distance on an image w/o calibrated bar Introduction
Modify file path/directory by changing folder names by merging a list Introduction
Modify a list (e.g. add/insert/remove an item between items, merge all items) Introduction
Merge/combine two pptx files into one, including merging the pptx files with the words in a sentence as file names (not all words has pptx files) Introduction
Applications of artificial intelligence/machine learning in industry Introduction
Write contents of DataFrame/memory into text file Introduction
Manual analysis of data Introduction
Ranking/most popular programming languages for data analysts Introduction
Ranking and votes of essential/most important skills for data analysts Introduction
Ranking/most popular automation testing tools Introduction
Ranking/most popular IT automation software tools Introduction
Ranking/most popular machine learning frameworks used by data scientists Introduction
Comparison of qualifications and skills between data science manager, engineering and scientist Introduction
Extract a mask from an image with a threshold Introduction
Wafer map failure pattern recognition (WMFPR) and similarity ranking (SR) Introduction
Support-vector machines(SVM)/support-vector networks(SVN) Introduction
Wafer map Introduction
Model-based clustering Introduction
K-Means clustering for images Introduction
Comparison between machine learning and human beings Introduction
Mask an image with a threshold or with a color as a threshold Introduction
Self-supervised machine learning Introduction
List of notations for machine learning application to wafers Introduction
Similarity-based clustering method (SCM) Introduction
Class Activation Mapping (CAM) Introduction
AI/machine learning algorism for text analysis Introduction
Autonomous vehicles/cars and machine learning Introduction
Overfitting in machine learning Introduction
Misclassification rate (classification error rate or error rate) in machine learning Introduction
   
Multiple linear regression Introduction
Confusion matrix heatmap Introduction
Evaluation of Precision in Machine Learning Process Introduction
Recall (Sensitivity or True Positive Rate) in machine learning Introduction
Bayes' theorem (Bayes rule or Bayes law) in machine learning Introduction
Strong machine learning and NLP departments in universities Introduction
Convert a list to a matrix Introduction
Keyword Module in Python Introduction
Electrical characteristics of the MOS capacitor Introduction
Nearest/most similar lyrics of a sentence to a CSV file Introduction
Machine learning applications in electron microscopy Introduction
Find the same elements in columns in two separate dataframes and then merge them Introduction
Trick: pd.concat() for merging/adding (two) columns Introduction
Codes: Automation of Mouse Movements and Clicks, and keyboard control (comparison among pyautogui, pygetwindow, pydirectinput, autoit, Quartz, platform, ctypes, uiautomation and Sikuli) Introduction
Principle and troubleshooting: Automation of Mouse Movements and Clicks (comparison among pyautogui, pygetwindow, pydirectinput, autoit, Quartz, platform, ctypes, uiautomation and Sikuli) Introduction
Click a menus of an application Introduction
Difference/comparison between real mouse click and click from script/program, e.g. Pyautogui Introduction
Trick: Get coordinate difference between mouse positions Introduction
Module import and execution are skipped during script execution Introduction
Good research topics in the field of semiconductor manufacturing and computer vision Introduction
Scalability in automation and machine learning projects Introduction
   
Compare (pattern/ratio of) two different columns, check whether column values match in DataFrame Introduction
Check whether one column contains number only and another column contains letters only or mixture of numbers and letters in DataFrame Introduction
Check the difference between two columns in DataFrame Introduction
   
   
Multimodal text and image similarity Introduction
Continue script execution no matter whether some try fails or not (finally) Introduction
Calculating the area fraction of each circle overlapped by a square grid and build wafer map Introduction
Machine learning applications in electron microscopy Introduction
Paraphrase mining Introduction
Multimodal text and image search Introduction
Remove/reload/unload an imported module/function/script Introduction
Access and use SQL Database on SSMS (Microsoft SQL Server Management Studio Express) with pyodbc Introduction
Generate a file name, folder name. {}{}....format. Manipulation of file and folder names (rename file name and folder name): i) Rename the file, and then open the file. If the folder exists, then no file will be copied, but the file will still be opened. ii) Print and export the folder names and file names (with or without extensions) from a folder into a text file. iii) csv2image filename. Introduction
Python modules to interact with the operating system (os, platform, subprocess, shutils, glob and sys) Introduction
Modify HTML webpage (e.g. with graph network by adding text/hyperlink in) Introduction
Create a log (log.log) file to monitor script execution Introduction
Last n days/weeks/months (.to_datetime(x), .set_index(y), .last(z), .reset_index(), and .max() in pandas) Introduction
Check all the imported/current modules/libraries Introduction
Combine multiple images into a single multi-page image or vice versa Introduction
Count the number of the pages in a single multi-page/frame image Introduction
Check if all the (and how many, length of a string) characters in the text are digits/numbers Introduction
Read outlook messages in .msg format Introduction
Separate plot data into the same graph/figure/image from different (multiple) csv files for each category Introduction
Plot multiple images on the same figure by hiding x- and y-labels Introduction
Plot multiple datasets on the same scatter graph with different x- and y-axis values Introduction
Inside/outside edges/margins of plotted images Introduction
Change date/month/year format Introduction
Plot figures with date/month/year Introduction
Sort dates/year/month by order Introduction
Avoid two or multiple plots being wrongly/incorrectly/unnecessarily mixed/overlap Introduction
Merge dictionaries (update(), **, chain(), ChainMap(), |, |=) Introduction
Remove rows if (multiple) NaN is more than a number in DataFrame Introduction
Multinomial Naive Bayes algorithm Introduction
Feature importance for Multinomial Naive Bayes algorithm Introduction
Test process in machine learning Introduction
Multiple headers in a csv file: Count the number of header rows first and then split a single csv file to multiple csv files Introduction
Copy a file or all files (with os.mkdir) to save to somewhere (create a directory first if it does not exist) Introduction
Estimate the file size in memory before saving to PC Introduction
Matplotlib Introduction
Merge columns which contain specific strings Introduction
Merge rows/columns of a csv file into an old csv file if the rows/columns are not in the old csv file Introduction
Optimizing failure analysis processes in semiconductor labs using machine learning Introduction
ML for failure analysis in the semiconductor industry Introduction
Sort DataFrame by dates/year/month order Introduction
Merge columns with character/symbol Separation Introduction
Plot workflow: Create new empty column in DataFrame → Move the cells in a column to another column under certain condition → Select specific columns for scatter plot
Create table with merged cells on pptx Introduction
Embed/hide codes or markers into HTML files Introduction
Comparative overview of multivariate statistical methods (Correlation Analysis, Regression Analysis, Factor Analysis, Cluster Analysis, Principal Component Analysis (PCA), Canonical Correlation Analysis, Discriminant Analysis, Path Analysis, Structural Equation Modeling (SEM), Multivariate Analysis of Variance (MANOVA), Analysis of Covariance (ANCOVA) ): purposes, variables, and outputs Introduction
Software/interface used in data science and machine learning Introduction

 

   
display.max_rows CSV: Sets the maximum number of rows pandas should output when printing out various output. (code)
display.max_columns CSV: Sets the maximum number of columns pandas should output when printing out various output. (code)
mangle_dupe_cols CSV: boolean, default True; when True, duplicate columns will be specified as ‘X’, ‘X.1’, …, ‘X.N’, rather than ‘X’...’X’
mode CSV: Python write mode, default ‘w’
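A short sketch of the two pandas display options above; the values 200 and 50 are arbitrary:

import pandas as pd

# Control how many rows/columns pandas prints before truncating the output
pd.set_option('display.max_rows', 200)
pd.set_option('display.max_columns', 50)

print(pd.get_option('display.max_rows'))   # 200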
model_selection (code). (code)
metrics (code). (code).
sys.modules Is a dictionary mapping the names of imported modules to the module object holding the code. code.
map() code. Introduction
import tkinter.messagebox Import messagebox from tkinter module. code.
main() code.
from pptx.enum.shapes import MSO_AUTO_SHAPE_TYPE (code)
__mod__  
__mul__  
Some functions from the python math module
import math It imports math. Example code
math.ceil() The ceiling of a given number is the nearest integer greater than or equal to that number. For example, the ceiling of 4.568 is 5. Code
math.floor() The floor of a given number is the nearest integer smaller than or equal to that number. For example, the floor of 4.68 is 4 and that of 4 is also 4.
math.sqrt() Calculate the square root of a number by importing math and using math.sqrt()
math.acos() Returns the arc cosine of x in radians.
math.atan() Returns the arc tangent of x, in radians.
math.e Returns the mathematical constant e (2.718281 . . .).
math.pi Returns the mathematical constant pi (3.141592 . . .).
math.exp() Returns e raised to the power x, where e is the base of natural logarithms.
math.pow(x, y) Returns x raised to the power y. code1, code2.
pow(x, y, z) x raised to the power y, modulo z, i.e. the remainder of (x ** y) divided by z.
** Introduction. Exponentiation, or called power; arbitrary variables.
math.log(x,y) Returns the logarithm of x to base y; with a single argument it returns the natural logarithm of x.
math.log2(x) Returns the base-2 logarithm of x.
math.expm1()  
math.radians(x) Converts angle x from degrees to radians.
math.tan(x) Returns the tangent of x radians.
math.acosh()  
math.atan2() Returns atan(y / x), in radians.
math.cos()  
math.erf()  
math.fabs() The absolute value of a number
math.sin(x) Returns the sine of x radians.
math.asin()  
math.atanh()  
math.cosh()  
math.erfc()  
math.factorial() Factorial: The factorial of a number x is defined as the continued product of the numbers from 1 to that value. code.
math.tau Returns the mathematical constant tau (6.283185 . . .).
math.asinh()  
math.ceil()  
math.degrees() Converts angle x from radians to degrees.
math.isnan(x) Returns True if x is not a number, otherwise returns False.
math.copysign(x, y) Copy sign: Returns a float with the magnitude (absolute value) of x but the sign of y. Example code
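A small, self-contained example exercising several of the math functions listed above (the input values are arbitrary):

import math

print(math.ceil(4.568))          # 5
print(math.floor(4.68))          # 4
print(math.sqrt(16))             # 4.0
print(math.log(8, 2))            # 3.0, logarithm of 8 to base 2
print(math.pow(2, 10))           # 1024.0
print(pow(2, 10, 7))             # 2, i.e. (2 ** 10) % 7
print(math.degrees(math.pi))     # 180.0
print(math.radians(180))         # 3.141592653589793
print(math.copysign(3, -1))      # -3.0, magnitude of 3 with the sign of -1
print(math.factorial(5))         # 120
print(math.isnan(float('nan')))  # True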
 
if __name__ == '__main__' Introduction. The processes starts reading the current file in order to execute the function specified. Without this clause, the import would first execute more process start calls, before getting to the function execution. code. code. code. code. Application example: run the page4853main3 program (as a module) through page4853main4 program. A similar example with defined functions is page4853main5.py. (code).
with mss.mss() as sct (code)
mss.tools.to_png (code)
year, month, date, hour, minute, and second "datefmt='%Y-%m-%d %H:%M:%S')": year, month, date, hour, minute, and second. Instruction.
makedirs() (code)
mainloop() This function starts a never ending event loop, and the program stays in this loop until we close the main window. (Code). code. code. (code). (code).
Multiple assignment E.g. a, b = 4, 3
MyClass(object) code.
ctypes.windll.user32.MessageBoxW Code. Code.
Mbox() Code.
Matrix and vector products
dot(a, b[, out]) Dot product of two arrays.
linalg.multi_dot(arrays, *[, out]) Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order.
vdot(a, b) Return the dot product of two vectors.
inner(a, b) Inner product of two arrays.
outer(a, b[, out]) Compute the outer product of two vectors.
matmul(x1, x2, /[, out, casting, order, …]) Matrix product of two arrays.
tensordot(a, b[, axes]) Compute tensor dot product along specified axes.
einsum(subscripts, *operands[, out, dtype, …]) Evaluates the Einstein summation convention on the operands.
einsum_path(subscripts, *operands[, optimize]) Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays.
linalg.matrix_power(a, n) Raise a square matrix to the (integer) power n.
kron(a, b) Kronecker product of two arrays.
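A minimal numpy sketch of a few of the product routines above (the arrays are arbitrary examples):

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
v = np.array([1, 2])
w = np.array([3, 4])

print(np.dot(a, b))                   # matrix product via dot
print(np.matmul(a, b))                # same result with matmul (the @ operator)
print(np.vdot(v, w))                  # 1*3 + 2*4 = 11
print(np.outer(v, w))                 # 2x2 outer product
print(np.kron(a, b))                  # 4x4 Kronecker product
print(np.linalg.matrix_power(a, 3))   # a @ a @ a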
Matrix eigenvalues
linalg.eig(a) Compute the eigenvalues and right eigenvectors of a square array.
linalg.eigh(a[, UPLO]) Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.
linalg.eigvals(a) Compute the eigenvalues of a general matrix.
linalg.eigvalsh(a[, UPLO]) Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
 
linalg.matrix_rank(M[, tol, hermitian]) Return matrix rank of array using SVD method
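A short example, using a small real symmetric matrix, for the eigenvalue and rank routines listed above:

import numpy as np

m = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # real symmetric matrix

vals, vecs = np.linalg.eig(m)           # eigenvalues and right eigenvectors
print(vals)                             # eigenvalues 3 and 1 (order not guaranteed)
print(vecs)

print(np.linalg.eigvalsh(m))            # [1. 3.], symmetric/Hermitian routine, sorted ascending
print(np.linalg.matrix_rank(m))         # 2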
__call__/method-wrapper name/type: implementation of the () operator; a.k.a. the callable object protocol
__get__/method-wrapper name/type: implementation of the read-only descriptor protocol (see XREF)
skimage.metrics from skimage.metrics import structural_similarity as compare_ssim
import argparse
import imutils
import cv2
code
skimage.measure.approximate_polygon(coords, ...) Approximate a polygonal chain with the specified tolerance.
skimage.measure.block_reduce(image, block_size) Down-sample image by applying function to local blocks.
skimage.measure.compare_mse(im1, im2) Compute the mean-squared error between two images.
skimage.measure.compare_nrmse(im_true, im_test) Compute the normalized root mean-squared error (NRMSE) between two images.
skimage.measure.compare_psnr(im_true, im_test) Compute the peak signal to noise ratio (PSNR) for an image.
skimage.measure.correct_mesh_orientation(...) Correct orientations of mesh faces.
skimage.measure.find_contours(array, level) Find iso-valued contours in a 2D array for a given level value.
skimage.measure.grid_points_in_poly Test whether points on a specified grid are inside a polygon.
skimage.measure.label(input[, neighbors, ...]) Label connected regions of an integer array.
skimage.measure.marching_cubes(volume, level) Marching cubes algorithm to find iso-valued surfaces in 3d volumetric data
skimage.measure.mesh_surface_area(verts, faces) Compute surface area, given vertices & triangular faces
skimage.measure.moments(image[, order]) Calculate all raw image moments up to a certain order.
skimage.measure.moments_central(image, cr, cc) Calculate all central image moments up to a certain order.
skimage.measure.moments_hu(nu) Calculate Hu’s set of image moments.
skimage.measure.moments_normalized(mu[, order]) Calculate all normalized central image moments up to a certain order.
skimage.measure.perimeter(image[, neighbourhood]) Calculate total perimeter of all objects in binary image.
skimage.measure.points_in_poly Test whether points lie inside a polygon.
skimage.measure.profile_line(img, src, dst) Return the intensity profile of an image measured along a scan line.
skimage.measure.ransac(data, model_class, ...) Fit a model to data with the RANSAC (random sample consensus) algorithm.
skimage.measure.regionprops(label_image[, ...]) Measure properties of labeled image regions.
skimage.measure.structural_similarity(*args, ...) Deprecated function. Use compare_ssim instead.
skimage.measure.subdivide_polygon(coords[, ...]) Subdivision of polygonal curves using B-Splines.
skimage.measure.CircleModel() Total least squares estimator for 2D circles.
skimage.measure.EllipseModel() Total least squares estimator for 2D ellipses.
skimage.measure.LineModel() Total least squares estimator for 2D lines.
skimage.measure.LineModelND() Total least squares estimator for N-dimensional lines.
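A brief sketch, assuming scikit-image is installed, that labels connected regions of a small binary array and measures them with the skimage.measure functions listed above:

import numpy as np
from skimage import measure

# Tiny binary image with two separate foreground blobs
img = np.array([[0, 1, 1, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 1, 1]], dtype=int)

labels = measure.label(img)            # label connected regions
props = measure.regionprops(labels)    # measure properties of each labeled region

for region in props:
    print(region.label, region.area, region.centroid)

print(measure.perimeter(img > 0))      # total perimeter of all objects in the binary image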
Mahotas

Is a computer vision and image processing library for Python. It includes many algorithms implemented in C++ for speed, operating on numpy arrays, with a very clean Python interface. Mahotas currently has over 100 functions for image processing and computer vision, and it keeps growing. Some examples of mahotas functionality:
         watershed
         convex points calculations.
         hit & miss, thinning
         Zernike & Haralick, local binary patterns, and TAS features
         morphological processing
         Speeded-Up Robust Features (SURF)
         thresholding
         convolution.
         Sobel edge detection.

multithreading Comparison between multithreading, multiprocessing and asyncio at page4797.
multiprocessing Is a parallel processing library that relies on subprocesses, rather than threads. Creating a process does not start it: for that use the start function. Execution of the process is not guaranteed until the .join() function is called on it. Arguments can be passed to the function of the process with the args keyword. This accepts a list (or tuple) of arguments, leading to a somewhat strange syntax for a single argument: proc = Process(target=print_func, args=(name,)). Introduction. Comparison between multithreading, multiprocessing and asyncio at page4797.
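A minimal, runnable sketch of the multiprocessing usage described above (print_func is only an illustrative name):

from multiprocessing import Process

def print_func(name):
    print('hello', name)

if __name__ == '__main__':
    # Creating the process does not start it; start() launches it,
    # and join() waits until it has finished.
    proc = Process(target=print_func, args=('world',))   # note the one-element tuple
    proc.start()
    proc.join()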
MILK This machine learning toolkit focuses on supervised classification with a gamut of classifiers available: SVM, k-NN, random forests, decision trees. A range of combination of these classifiers gives different classification systems. For unsupervised learning, one can use k-means clustering and affinity propagation. There is a strong emphasis on speed and low memory usage. Therefore, most of the performance-sensitive code is in C++.
cv2.matchTemplate Introduction. Returns a correlation map, essentially a grayscale image. Other than contour filtering, matching keypoints, contour detection and processing (with thresholding, edge detection, etc. to generate a binary image), template matching is arguably one of the most simple forms of object detection (only 2-3 lines of code), which can detect multiple instances of the same/similar object in an input image. This method quickly fails when there are unknown changes of rotation, scale, viewing angle, etc. In those cases, you should use dedicated object detectors including HOG + Linear SVM, Faster R-CNN, SSDs, YOLO, etc. code. code. code. code. code.
Limitations: The matching can fail (if there are no special treatments in the script) if the size of the template is substantially smaller than the feature in the image being searched.
(min_val, max_val, min_loc, max_loc) = cv2.minMaxLoc() Returns the minimum and maximum intensity values in the correlation result together with the locations of these intensities. Takes the correlation result and returns a 4-tuple which includes the minimum correlation value, the maximum correlation value, the (x, y)-coordinate of the minimum value, and the (x, y)-coordinate of the maximum value, respectively. max_loc is the location with the highest correlation value in the map, corresponding to the position where the input image best matches the defined template. code. code. code.
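A hedged sketch of template matching with cv2.matchTemplate and cv2.minMaxLoc; 'image.png' and 'template.png' are placeholder file names:

import cv2

image = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)        # placeholder paths
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
h, w = template.shape

result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)   # correlation map
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# For TM_CCOEFF_NORMED, max_loc is the top-left corner of the best match
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(image, top_left, bottom_right, 255, 2)
cv2.imwrite('match.png', image)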
(cv2 or win32gui).moveWindow Set the position (coordinates) of the opened window. code. (code)
from matplotlib import pyplot as plt == import matplotlib.pyplot as plt. (code).
matplotlib-scalebar Display a scale bar, aka micron bar. It is particularly useful when displaying calibrated images plotted using plt.imshow(...). Introduction.
ScaleBar() Introduction. scalebar = ScaleBar(dx, units="m", dimension="si-length", label=None, length_fraction=None, height_fraction=None, width_fraction=None, location=None, pad=None, border_pad=None, sep=None, frameon=None, color=None, box_color=None, box_alpha=None, scale_loc=None, label_loc=None, font_properties=None, label_formatter=None, scale_formatter=None, fixed_value=None, fixed_units=None, animated=False, rotation=None)
dx (required):
        Size of one pixel in units specified by the next argument.
units:
        Units of dx. The units needs to be valid for the specified dimension. Default: m.
label:
        Optional label associated with the scale bar. Default: None, no label is shown. The position of the label with respect to the scale bar can be adjusted using label_loc argument.
length_fraction:
        Desired length of the scale bar as a fraction of the subplot's width. Default: None, value from matplotlibrc or 0.2.
height_fraction:
        Deprecated, use width_fraction.
width_fraction:
        Width of the scale bar as a fraction of the subplot's height. Default: None, value from matplotlibrc or 0.01.
loc:
        Alias for location.
pad:
        Padding inside the box, as a fraction of the font size. Default: None, value from matplotlibrc or 0.2.
border_pad:
        Padding outside the box, fraction of the font size. Default: None, value from matplotlibrc or 0.1.
sep:
        Separation in points between the scale bar and scale, and between the scale bar and label. Default: None, value from matplotlibrc or 5.
frameon:
        Whether to draw a box behind the scale bar, scale and label. Default: None, value from matplotlibrc or True.
color:
        Color for the scale bar, scale and label. Default: None, value from matplotlibrc or k (black).
box_color:
        Background color of the box. Default: None, value from matplotlibrc or w (white).
box_alpha:
        Transparency of box. Default: None, value from matplotlibrc or 1.0 (opaque).
scale_loc:
        Location of the scale with respect to the scale bar. Either bottom, top, left, right. Default: None, value from matplotlibrc or bottom.
label_loc:
        Location of the label with respect to the scale bar. Either bottom, top, left, right. Default: None, value from matplotlibrc or top.
font_properties:
        Font properties of the scale and label text, specified either as dict or str. See FontProperties for the arguments. Default: None, default font properties of matplotlib.
label_formatter:
        Deprecated, use scale_formatter.
scale_formatter:
        Custom function called to format the scale. Needs to take 2 arguments - the scale value and the unit. Default: None, which uses the library's default formatting of the scale value followed by its unit.
fixed_value:
        Value for the scale. The length of the scale bar is calculated based on the specified pixel size dx. Default: None, the value is automatically determined based on length_fraction.
fixed_units:
        Units of the fixed_value. Default: None, if fixed value is not None, the units of dx are used.
animated:
        Animation state. Default: False
rotation:
        Whether to create a scale bar based on the x-axis (default) or y-axis. rotation can either be horizontal or vertical. Note you might have to adjust scale_loc and label_loc to achieve desired layout. Default: None, value from matplotlibrc or horizontal.
Dimension of dx and units. It can be one of:
        si-length (default): scale bar showing km, m, cm, etc.
         imperial-length: scale bar showing in, ft, yd, mi, etc.
         si-length-reciprocal: scale bar showing 1/m, 1/cm, etc.
         pixel-length: scale bar showing px, kpx, Mpx, etc.
         angle: scale bar showing °, ʹ (minute of arc) or ʹʹ (second of arc)
         a matplotlib_scalebar.dimension._Dimension object.
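A minimal usage sketch for matplotlib-scalebar, assuming a calibrated image in which one pixel corresponds to 0.2 µm:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib_scalebar.scalebar import ScaleBar

img = np.random.random((256, 256))       # stand-in for a calibrated image

fig, ax = plt.subplots()
ax.imshow(img, cmap='gray')

# dx = 0.2 micrometer per pixel (assumed calibration)
scalebar = ScaleBar(0.2, units='um', location='lower right')
ax.add_artist(scalebar)

plt.show()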
Colors

cmaps['Perceptually Uniform Sequential'] = ['viridis', 'plasma', 'inferno', 'magma', 'cividis']
cmaps['Sequential'] = ['Greys', 'Purples', 'Blues', 'Greens', 'Oranges', 'Reds', 'YlOrBr', 'YlOrRd', 'OrRd', 'PuRd', 'RdPu', 'BuPu', 'GnBu', 'PuBu', 'YlGnBu', 'PuBuGn', 'BuGn', 'YlGn']
cmaps['Sequential (2)'] = ['binary', 'gist_yarg', 'gist_gray', 'gray', 'bone', 'pink', 'spring', 'summer', 'autumn', 'winter', 'cool', 'Wistia', 'hot', 'afmhot', 'gist_heat', 'copper']
cmaps['Diverging'] = [ 'PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu', 'RdYlBu', 'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic']
cmaps['Cyclic'] = ['twilight', 'twilight_shifted', 'hsv']
cmaps['Qualitative'] = ['Pastel1', 'Pastel2', 'Paired', 'Accent', 'Dark2', 'Set1', 'Set2', 'Set3',
'tab10', 'tab20', 'tab20b', 'tab20c']
cmaps['Miscellaneous'] = ['flag', 'prism', 'ocean', 'gist_earth', 'terrain', 'gist_stern', 'gnuplot', 'gnuplot2', 'CMRmap', 'cubehelix', 'brg', 'gist_rainbow', 'rainbow', 'jet', 'turbo', 'nipy_spectral', 'gist_ncar']. code

Matplotlib.markers An amazing visualization library for 2D plots of arrays. Marker “X”: x (filled); “.”: point; “,“: pixel; “o”: circle; “v”: triangle_down; “^”: triangle_up; “<": triangle_left; “>”: triangle_right; “1”: tri_down; “2”: tri_up; “3”: tri_left; “4”: tri_right; “8”: octagon; “s”: square; “p”: pentagon; “P”: plus (filled); “*”: star; “h”: hexagon1; “H”: hexagon2; “+”: plus; “x”: x; “D”: diamond; “d”: thin_diamond; “|”: vline; “_”: hline; 0 (TICKLEFT): tickleft; 1 (TICKRIGHT): tickright; 2 (TICKUP): tickup; 3 (TICKDOWN): tickdown; 4 (CARETLEFT): caretleft; 5 (CARETRIGHT): caretright; 6 (CARETUP): caretup; 7 (CARETDOWN): caretdown; 8 (CARETLEFTBASE): caretleft (centered at base); 9 (CARETRIGHTBASE): caretright (centered at base); 10 (CARETUPBASE): caretup (centered at base); 11 (CARETDOWNBASE): caretdown (centered at base); "None", ” ” or “”: nothing; ‘$…$’: Render the string using mathtext. E.g “$r$” for marker showing the letter r; verts: A list of (x, y) pairs used for Path vertices. The center of the marker is located at (0, 0) and the size is normalized, such that the created path is encapsulated inside the unit cell; path: A Path instance; (numsides, style, angle): The marker can also be a tuple (numsides, style, angle), which will create a custom, regular symbol. A) numsides: the number of sides. B) style: the style of the regular symbol, 0: a regular polygon 1: a star-like symbol, 2: an asterisk. code.
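A short example using two of the marker codes above ('o' and 'X'); the data are arbitrary:

import matplotlib.pyplot as plt

x = list(range(5))
plt.plot(x, [v ** 2 for v in x], marker='o', linestyle='--', label="'o' circle markers")
plt.scatter(x, [v + 1 for v in x], marker='X', label="'X' filled x markers")
plt.legend()
plt.show()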
matplotlib.pyplot code. (code)
matplotlib.cbook (code) (code)
.rcParams All of the rc settings are stored in a dictionary-like variable called matplotlib.rcParams. (code)
matplotlib.pyplot.xticks() The xticks() function is used to get and set the current tick locations and labels of the x-axis. code. code.
matplotlib.pyplot.yticks() The yticks() function is used to get and set the current tick locations and labels of the y-axis. code.
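A short example of getting and setting tick locations and labels with plt.xticks() and plt.yticks():

import matplotlib.pyplot as plt

plt.plot([0, 1, 2, 3], [0, 1, 4, 9])

plt.xticks([0, 1, 2, 3], ['zero', 'one', 'two', 'three'], rotation=45)
plt.yticks([0, 3, 6, 9])

locs, labels = plt.xticks()   # get the current x-tick locations and labels
print(locs)

plt.show()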
Series.plot method arguments
label Label for plot legend
ax matplotlib subplot object to plot on; if nothing passed, uses active matplotlib subplot
style Style string, like 'ko--', to be passed to matplotlib
alpha The plot fill opacity (from 0 to 1)
kind Can be 'area', 'bar', 'barh', 'density', 'hist', 'kde', 'line', 'pie'
logy Use logarithmic scaling on the y-axis
use_index Use the object index for tick labels
rot Rotation of tick labels (0 through 360)
xticks Values to use for x-axis ticks
yticks Values to use for y-axis ticks
xlim x-axis limits (e.g., [0, 10])
ylim y-axis limits
grid Display axis grid (on by default)
DataFrame-specific plot arguments
sharex If subplots=True, share the same x-axis, linking ticks and limits
sharey If subplots=True, share the same y-axis
figsize Size of figure to create as tuple
title Plot title as string
legend Add a subplot legend (True by default)
sort_columns Plot columns in alphabetical order; by default uses existing column order
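A brief pandas sketch using a few of the Series.plot and DataFrame.plot arguments listed above (random data, for illustration only):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

s = pd.Series(np.random.randn(100).cumsum())
s.plot(label='random walk', style='k--', alpha=0.7, grid=True)
plt.legend()

df = pd.DataFrame(np.random.randn(50, 3).cumsum(axis=0), columns=list('ABC'))
df.plot(subplots=True, sharex=True, figsize=(8, 6), title='DataFrame subplots')

plt.show()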
xy= code.
xytext= code.
arrowprops= code.
.annotate() code.
arrowstyle code.
->, <-> dashed single arrow line. dashed double arrow line.
va='center' code.
multialignment='right' code.
'ls' code.
plt.plot() Combine multiple plots and plot continuous curve, solid green ('-g'), dashed green ('--g'), dashdot ('-.g'), dotted (':g'). Plot by different grouping and summing. Introduction. code.
plt.scatter() Plot scattered curves. Introduction.
.subplots/.subplot Plot each DataFrame column in a separate subplot. E.g. .subplots(nrows=5, ncols=10): 5 rows and 10 columns. The third argument specifies which rectangle will contain the plot specified by the following function calls. As a convenience, the commas separating the three arguments in the subplot routine can be omitted, provided they are all single-digit arguments. E.g. plt.subplot(2, 1, 1) = plt.subplot(211). Can be used to compare different views of data side by side in an array. code. code. image. code. (code).
pyplot.subplots options
nrows Number of rows of subplots
ncols Number of columns of subplots
sharex All subplots should use the same x-axis ticks (adjusting the xlim will affect all subplots)
sharey All subplots should use the same y-axis ticks (adjusting the ylim will affect all subplots)
subplot_kw Dict of keywords passed to add_subplot call used to create each subplot
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None), or plt.subplots_adjust(wspace=0, hspace=0) Adjusting the spacing around subplots. code.
**fig_kw Additional keywords to subplots are used when creating the figure, such as plt.subplots(2, 2, figsize=(8, 6))
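A compact example of plt.subplots and plt.subplots_adjust with the options above:

import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True, figsize=(8, 6))

for i, ax in enumerate(axes.flat):
    ax.plot(np.random.randn(50).cumsum())
    ax.set_title('subplot %d' % i)

plt.subplots_adjust(wspace=0.3, hspace=0.3)
plt.show()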
 
Merge two csv files CSV: Introduction
Split columns and merge in csv CSV: Split columns and then merge the splits in a csv file. Introduction
Count the number of lines (rows) and columns in a txt (and a csv) file, count different numbers in each region in a column, count missing or not available values CSV: Introduction. code.
.maximize_window() (code)
mediapipe MediaPipe offers ready-to-use yet customizable Python solutions as a prebuilt Python package.
.move_range() (code)
isMaximized (code)
.isMinimized (code)
.maximize() (code)
.activate() Activate a window: with the active cursor in the window and the window is brought to the most front on the monitor. (code)
make_pipeline (code).
min_count= The minimum count of words to consider when training the model; words with occurrence less than this count will be ignored. The default for min_count is 5. (code). (code).
from sklearn.manifold import TSNE (code).
model.wv[] (code).
Doc2Vec/model.wv.most_similar() Doc2Vec.most_similar(positive=[], negative=[], topn=10, restrict_vocab=None, indexer=None). Find the top-N most similar words. Positive words contribute positively towards the similarity, negative words negatively. This method computes cosine similarity between a simple mean of the projection weight vectors of the given words and the vectors for each word in the model. If topn is False, most_similar returns the vector of similarity scores; restrict_vocab is an optional integer which limits the range of vectors which are searched for most-similar values, e.g. restrict_vocab=10000 would only check the first 10000 word vectors in the vocabulary order. (code).
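A minimal gensim sketch, assuming gensim 4.x (where the vector size argument is named vector_size); the toy corpus is only for illustration, so the reported similarities are not meaningful:

from gensim.models import Word2Vec

sentences = [['wafer', 'map', 'defect', 'pattern'],
             ['wafer', 'defect', 'classification'],
             ['map', 'pattern', 'classification']]

# min_count=1 keeps every word in this tiny toy corpus
model = Word2Vec(sentences, vector_size=20, min_count=1, seed=1)

print(model.wv['wafer'])                                   # the learned word vector
print(model.wv.most_similar(positive=['wafer'], topn=2))   # top-2 most similar words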
MSO_CONNECTOR.STRAIGHT shapes.add_connector(MSO_CONNECTOR.STRAIGHT, Begin_x, Begin_y, End_x, End_y). (code).
Mm Inches, Emu, Cm, Mm, Pt, and Px are length classes (subclasses of the Length base class), providing properties for converting length values to convenient units.
.pixelMatchesColor() Introduction
shutil.move() (code)
np.ma.masked_where() (code)
sklearn.cluster.KMeans() Introduction

 

 

 

=================================================================================