VS Code
Atom
nano
The simplest option, with a self-explanatory interface.
Vim
A more complex editor with many features. Get a Vim cheat sheet when starting out.
Emacs
A more complex editor with many features. Get an Emacs cheat sheet when starting out.
git
- The Git Book is a good place to start.
- You can use `git gui` to create commits and `gitk` to view the history.
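As a minimal command line counterpart to `git gui`, a first commit might look like this (repository and file names are made up):

```shell
# create a toy repository and make a first commit
mkdir demo-repo && cd demo-repo
git init -q
echo "first note" > notes.txt
git add notes.txt
# -c sets name/email just for this command, in case git is not configured yet
git -c user.name="Student" -c user.email="student@example.com" commit -q -m "Add notes"
git log --oneline  # shows the new commit; gitk shows the same history graphically
```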
GitHub
Host for git repositories.
- You can create free private repositories with a student account.
Bitbucket
Free alternative to GitHub with private repositories.
Markdown + pandoc
Good for writing PDF documents quickly. Not as nice as LaTeX, but good enough for exercises.
- Can compile Markdown to PDF, HTML, and many other formats.
- Allows inline HTML, LaTeX formulas, ...
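A typical conversion might look like this (the file name is made up; the PDF step additionally needs a LaTeX installation):

```shell
# a small Markdown file to convert
printf '# Title\n\nSome *text* in a paragraph.\n' > notes.md
# -s produces a standalone document, -o selects the output file and format
pandoc notes.md -s -o notes.html
pandoc notes.md -o notes.pdf
```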
Detexify ↗
Draw the symbol you need and Detexify will tell you the corresponding LaTeX command and package.
SyncTeX
Compile your LaTeX with `--synctex=1` to link the produced PDF to your LaTeX source code. If you have SyncTeX support in your PDF/PS/DVI viewer and your editor, you can Ctrl-click on a paragraph to scroll to it and have it highlighted in the other document.
latexmk
Automagically performs all steps needed to create the index, run BibTeX/Biber, resolve references, ...
- You can get a continuous preview with the `-pvc` option.
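A typical invocation, with a hypothetical `main.tex`, could be:

```shell
latexmk -pdf -synctex=1 main.tex   # build once, rerunning pdflatex/bibtex as needed
latexmk -pdf -pvc main.tex         # watch mode: rebuild and refresh the viewer on every save
latexmk -C                         # clean up all generated files, including the PDF
```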
JabRef ↗
Tool to manage your BibTeX references. You can search and tag the references, link them to PDFs, and add summaries.
IguanaTex ↗
PowerPoint plugin to use LaTeX formulas in your document.
TexMaker
VS-Code plugin
Sublime plugin
- LaTeXTools
- LaTeX-cwl
LyX ↗
WYSIWYM editor for documents which uses LaTeX internally and also exports LaTeX code. The document and formulas are displayed similarly to the final output. Mathematical formulas can be written either as LaTeX code or with various shortcuts (e.g. Alt-M G A for alpha, read "alt math Greek alpha", or Alt-M I for integrals).
- You can write raw LaTeX via `Ctrl-L` for features that are not natively supported by LyX.
gdb
- You can modify the startup script `~/.gdbinit`. There exist various init files to support colored output (copy such a file into your init file) and many other features.
- If you want to debug a program which takes command line arguments, you can pass them like `gdb --args program param1 param2`.
- You can print the first three elements of an array using `p *ptr@3`. If you have a 3x2 matrix, you can also use `p *ptr@3@2`, which gives the output a clearer structure than `p *ptr@6`.
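gdb can also be scripted in batch mode, which is handy for quick checks. A sketch with a made-up crashing program:

```shell
# a tiny program that dereferences NULL (illustration only)
cat > crash.c <<'EOF'
#include <stdio.h>
int main(int argc, char **argv) {
    int *p = 0;
    printf("arg count: %d\n", argc);
    return *p;  /* segfault */
}
EOF
gcc -g -O0 crash.c -o crash
# run under gdb non-interactively: run, then print a backtrace at the crash
gdb --batch -ex run -ex bt --args ./crash one two
```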
gdbgui ↗
"Browser-based debugger for C, C++, go, rust, and more"
Valgrind
Useful for finding difficult memory bugs when gdb doesn't catch them or doesn't give any useful information. Examples are double-free bugs, bugs which corrupt the allocator metadata (in this case you might only get an error the next time you try to allocate memory), or reads of uninitialized memory.
- You can use the flag `valgrind --track-origins=yes` to make Valgrind track and report where the uninitialized memory was allocated.
- Besides memory checking with the default `--tool=memcheck`, there are many other tools, e.g. `--tool=cachegrind`, which computes cache misses for the instruction and data caches.
- Warning: Valgrind will make your program run really slowly.
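For example, reading uninitialized memory (toy program, file names made up):

```shell
cat > uninit.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    int *a = malloc(4 * sizeof(int));  /* never initialized */
    if (a[2] > 0)                      /* branches on an uninitialized value */
        printf("positive\n");
    free(a);
    return 0;
}
EOF
gcc -g -O0 uninit.c -o uninit
valgrind --track-origins=yes ./uninit
# memcheck reports "Conditional jump or move depends on uninitialised value(s)"
# and, thanks to --track-origins=yes, points at the malloc call
```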
perf
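Sampling profiler on Linux. A typical workflow might look like this (the program name is made up):

```shell
perf record -g ./myprogram   # sample the program, recording call graphs (-g)
perf report                  # interactive report of where the time was spent
perf stat ./myprogram        # quick counters: cycles, instructions, cache misses, ...
```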
Flamegraph ↗
Nice way to visualize the results of perf.
- `perf script | ~/FlameGraph/stackcollapse-perf.pl | ~/FlameGraph/flamegraph.pl > flamegraph.svg` creates an interactive SVG image from the perf data.
- You can also mix in some `grep`, `sed`, or `c++filt`.
- There is also a module for Python.
Valgrind
- For measuring cache misses. See the valgrind section in 'Debugging'
c++filt
Demangles C++ names to make them more readable. Nice in combination with profiler output or flame graphs.
cuda-gdb
gdb with CUDA extensions. You can also set breakpoints in kernels and switch between threads to inspect their variables.
- You can also create an init file `~/cuda-gdbinit`. Just use the same file as for `gdb` if you want colored backtraces.
- To break on API errors like failed kernel launches or other error codes, use `set cuda api_failures stop`.
- To check for invalid memory addresses, you can use `set cuda memcheck on` to enable something like `valgrind --tool=memcheck` for CUDA. Warning: this makes your program much slower.
- TODO: problem with breakpoints on a GPU connected to a display.
nvprof
Command line profiler for CUDA programs. You can also generate a file which can be imported into nvvp using `--analysis-metrics -o file`. This helps with profiling a remote program.
- You can output the profiling results in CSV format with a common time unit using `--csv -u us`.
- Profiling can be limited to specific kernels using `--kernels my_kernel`, which applies to the following `--analysis-metrics`, `--events`, or `--metrics` options.
- You can control the GPUs visible to your program by setting the environment variable `CUDA_VISIBLE_DEVICES`. Example: `CUDA_VISIBLE_DEVICES=0,2` masks out GPU 1. Run `nvidia-smi` to get the number of each GPU.
matplotlib
Python library for plotting.
gnuplot
A language designed especially for plotting. Can export to many formats, including PNG, SVG, and LaTeX.
- You can use the init file `.gnuplot` to run code or set options at startup.
- Can fit arbitrary parameters to compute a function that approximates the data points using `fit`.
- You can also plot data from the output of shell commands: `plot '< python gen_data.py'` or `plot '< sed -n "s/^# //p" file'` or even with pipes: `plot '< cat data/* | sed -n "s#re=\(.*\)#\1#p"'`
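A small sketch of `fit` on generated data (file names are made up; the data is roughly y = 2x):

```shell
# generate slightly noisy toy data with seq/awk
seq 1 5 | awk '{print $1, 2*$1 + 0.1*($1%2)}' > data.txt
# fit a line to the data and plot both into an SVG
gnuplot <<'EOF'
set terminal svg
set output 'fit.svg'
f(x) = a*x + b
fit f(x) 'data.txt' via a, b
plot 'data.txt' title 'data', f(x) title 'fit'
EOF
```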
PGFPlots
Handy LaTeX package to create plots directly in LaTeX. Can plot data in CSV or gnuplot format. Supports diagrams, graphs, box plots, 3d plots and many more.
- There are also higher-level features such as loops and random numbers.
- Becomes slow with many plots. You can avoid recomputing the plots by compiling them into a PDF in another document and including it with `\includegraphics`. This is done automatically if you use `\usepgfplotslibrary{external}` and `\tikzexternalize[prefix=TikzPictures/]` in your preamble.
- You can use gnuplot to plot your data.
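A minimal axis as a sketch (the coordinates are made up):

```latex
% preamble: \usepackage{pgfplots} \pgfplotsset{compat=1.18}
\begin{tikzpicture}
  \begin{axis}[xlabel={$n$}, ylabel={time [s]}]
    \addplot coordinates {(1,0.5) (2,1.1) (4,2.3) (8,4.9)};
  \end{axis}
\end{tikzpicture}
```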
Zathura
Minimalist, keyboard-driven document viewer with SyncTeX support.
Okular
KDE's document viewer, also with SyncTeX support.
Fish Shell ↗
Shell with useful autocompletion and many other features.
Zsh
Shell with useful autocompletion and many other features.
- To get started, oh-my-zsh is a good way to manage your zsh configuration.
Slurm
Job manager for compute clusters.
- `srun --ntasks=42 script.sh` allocates 42 tasks and runs the job in your terminal. The default is one task per node.
- `srun --ntasks=42 --pty bash` allocates 42 tasks and starts an interactive session. Use `exit` to leave the interactive session.
- `sbatch --ntasks=1 script.sh` allocates resources and runs the script. The script gets copied to another location and is executed once enough resources are available. In contrast to `srun`, the script is only run on the first node! You can use `srun` inside the batch script.
- `squeue` shows the current jobs in the job queue.
- `scancel` kills your jobs or removes them from the queue.
- `salloc --ntasks=42` allocates resources for yourself, but you stay on the login node. If you want to use the resources, run `srun` afterwards. Useful if one job contains multiple `srun` commands, as you don't have to reallocate resources for each one. Use `exit` to release the allocation.
- Use `--job-name="Bob"` to give your job a descriptive name.
- Use `--time=8:00:00` to set an upper limit on the runtime of your program.
- If you run a batch script with `srun` or `sbatch`, you can also define the command line parameters inside the script using `#SBATCH --ntasks=42`.
```shell
srun -n4 hostname  # runs hostname on four nodes
# prints the allocated compute nodes
```

```shell
salloc -n4  # allocate four nodes
hostname
# prints the current login node
srun hostname  # runs hostname on all allocated nodes
# prints the allocated compute nodes
srun -n2 hostname  # runs hostname on two of the allocated nodes
# prints the allocated compute nodes
exit
```

```shell
echo -e '#!/usr/bin/env bash\nhostname' > script.sh
sbatch -n4 script.sh  # submits the script
# returns immediately and stores the output of the job in a file
# the output file contains only the host name of the first node
```

```shell
echo -e '#!/usr/bin/env bash\nsrun hostname' > script.sh
sbatch -n4 script.sh  # submits a script to run hostname on four nodes
# returns immediately and stores the output of the job in a file
# the output file contains the host names of all compute nodes
```

```shell
echo -e '#!/usr/bin/env bash\n#SBATCH -n4\n#SBATCH --output myoutfile\nsrun hostname' > script.sh
sbatch script.sh
# prints the host names of all four allocated nodes into myoutfile
```

Some more advanced stuff:
- Slurm sets various environment variables which you can use in your scripts.
- You can queue multiple versions of one `sbatch` job using task arrays with `--array=0-17`. You can use the environment variable `SLURM_ARRAY_TASK_ID` in your scripts to find out which array task you are executing.
```shell
# program we want to run with different parameters
# (single quotes keep $1 from being expanded by the current shell)
echo 'sleep 3; echo $1' > smartprogram.sh
# batch script which uses the array id to change the parameters
echo -e '#!/usr/bin/env bash\nsrun bash smartprogram.sh $SLURM_ARRAY_TASK_ID' > script.sh
# run the program multiple times
sbatch -n1 --array=3-5 script.sh
# outputs the numbers 3, 4, and 5 in three output files
```
Python
- NumPy for efficient array/vector/matrix operations.
- SciPy offers many useful algorithms, e.g. linear algebra, FFT, and optimization.
- SymPy for symbolic computations, integrals, and derivatives.
- Matplotlib for plotting.
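A quick taste of NumPy and SciPy, run from the shell (assumes both packages are installed):

```shell
python3 - <<'EOF'
import numpy as np
from scipy import optimize

# vectorized arithmetic instead of Python loops
a = np.arange(6).reshape(2, 3)
print(a @ a.T)          # 2x2 matrix product

# find a root of x^3 - 2 with SciPy's Brent method
root = optimize.brentq(lambda x: x**3 - 2, 0, 2)
print(round(root, 4))   # prints 1.2599, the cube root of 2
EOF
```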
FEM
- FEniCS: Python
- deal.II: C++