Command line interface
CLI utilities in CCBR Tools
Main CLI
Usage: ccbr_tools [OPTIONS] COMMAND [ARGS]...
Utilities for CCBR Bioinformatics Software
For more options, run: ccbr_tools [command] --help
https://ccbr.github.io/Tools/
Options:
-v, --version Show the version and exit.
-h, --help Show this message and exit.
Commands:
send-email Send an email (works on biowulf)
quarto-add Add a quarto extension
cite Print the citation in the desired format
version Print the version of ccbr_tools
All installed tools:
ccbr_tools
gb2gtf
hf
intersect
jobby
jobinfo
peek
send-email
Usage: ccbr_tools send-email [OPTIONS] [TO_ADDRESS] [TEXT]
Send an email (works on biowulf)
Arguments:
to_address The email address of the recipient
text The plain text content of the email
Options:
-s, --subject TEXT The subject line of the email
-a, --attach-html PATH The file path to the HTML attachment
-r, --from-addr TEXT The email address of the sender
-d, --debug Return the Email Message object without sending the
email
-h, --help Show this message and exit.
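The `--debug` flag above returns the message object without sending it. A minimal sketch of what such a message-building step might look like using Python's standard library `email` module — the `build_email` function, its defaults, and the addresses are illustrative, not the actual `send-email` internals:

```python
from email.message import EmailMessage
from pathlib import Path

def build_email(to_address, text, subject="", attach_html=None,
                from_addr="sender@example.com"):
    """Assemble an EmailMessage; with --debug a CLI could return it unsent."""
    msg = EmailMessage()
    msg["To"] = to_address
    msg["From"] = from_addr
    msg["Subject"] = subject
    msg.set_content(text)          # plain-text body
    if attach_html:                # optional HTML attachment, as with -a/--attach-html
        html = Path(attach_html).read_bytes()
        msg.add_attachment(html, maintype="text", subtype="html",
                           filename=Path(attach_html).name)
    return msg

msg = build_email("user@example.com", "Pipeline finished.", subject="Run complete")
```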
quarto-add
Usage: ccbr_tools quarto-add [OPTIONS] EXT_NAME
Add a quarto extension
Arguments:
ext_name The name of the extension in ccbr_tools
Examples:
ccbr_tools quarto-add fnl
Options:
-h, --help Show this message and exit.
Available extensions: fnl
cite
Usage: ccbr_tools cite [OPTIONS] CITATION_FILE
Print the citation in the desired format
citation_file : Path to a file in Citation File Format (CFF) [default: the
CFF for ccbr_tools]
Options:
-f, --output-format [apalike|bibtex|cff|codemeta|endnote|ris|schema.org|zenodo]
Output format for the citation
-h, --help Show this message and exit.
version
Usage: ccbr_tools version [OPTIONS]
Print the version of ccbr_tools
Options:
-d, --debug Print the path to the VERSION file
-h, --help Show this message and exit.
Additional utilities
gb2gtf
Convert GenBank files to GTF format.
Usage: gb2gtf sequence.gb > sequence.gtf
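The output target is standard GTF: nine tab-separated fields ending in a `gene_id`/`transcript_id` attribute string. A sketch of emitting one such line — the field layout is standard GTF, but the function name and the sample feature values are illustrative, not the actual gb2gtf internals:

```python
def gtf_line(seqname, source, feature, start, end, strand, gene_id, transcript_id):
    """Format one GTF record (9 tab-separated fields, 1-based coordinates)."""
    attrs = f'gene_id "{gene_id}"; transcript_id "{transcript_id}";'
    return "\t".join([seqname, source, feature, str(start), str(end),
                      ".",      # score (unused here)
                      strand,
                      ".",      # frame (unused here)
                      attrs])

line = gtf_line("NC_000913.3", "gb2gtf", "exon", 190, 255, "+", "thrL", "thrL.1")
```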
hf
Finds homologs in human and mouse.
About:
hf (HomologFinder) finds homologs in human and mouse.
If the input gene or gene list is human, it returns the mouse homolog(s), and vice versa.
Usage:
$ hf -h
Examples:
$ hf -g ZNF365
$ hf -l Wdr53,Zfp365
$ hf -f genelist.txt
usage: hf [-h] [-v] [-g GENE] [-l GENELIST] [-f GENELISTFILE]
Get Human2Mouse (or Mouse2Human) homolog gene or genelist
options:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-g GENE, --gene GENE single gene name
-l GENELIST, --genelist GENELIST
comma separated gene list
-f GENELISTFILE, --genelistfile GENELISTFILE
genelist in file (one gene per line)
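The examples above (`ZNF365` vs `Wdr53,Zfp365`) suggest the direction can be inferred from symbol casing: human gene symbols are conventionally all-uppercase, mouse symbols title-case. A toy sketch of that direction logic — the lookup table is a two-entry stand-in for hf's real homology data, and the casing heuristic is an assumption about its behavior:

```python
# Toy homology table (human -> mouse); hf's real data source is much larger.
HOMOLOGS = {"ZNF365": "Zfp365", "WDR53": "Wdr53"}
MOUSE_TO_HUMAN = {mouse: human for human, mouse in HOMOLOGS.items()}

def find_homolog(gene):
    """Guess species from symbol casing, then look up the homolog."""
    if gene.isupper():                  # all-caps: treat as a human symbol
        return HOMOLOGS.get(gene)
    return MOUSE_TO_HUMAN.get(gene)     # otherwise treat as a mouse symbol
```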
intersect
USAGE:
intersect filename1 filename2 f1ColumnIndex f2ColumnIndex
Ex: intersect file1 file2 0 0
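A sketch of the behavior the usage implies: keep the lines of the second file whose chosen column value also appears in the chosen column of the first file, with 0-based column indices as in the `intersect file1 file2 0 0` example. The function name and exact output format are assumptions, not the tool's actual implementation:

```python
def intersect_columns(lines1, lines2, col1, col2):
    """Return lines of lines2 whose column col2 matches some column col1 in lines1."""
    keys = {line.split("\t")[col1] for line in lines1 if line.strip()}
    return [line for line in lines2
            if line.strip() and line.split("\t")[col2] in keys]

file1 = ["geneA\t1", "geneB\t2"]
file2 = ["geneB\tx", "geneC\ty"]
hits = intersect_columns(file1, file2, 0, 0)
```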
jobby
jobby: Will take your job(s)... and display their information!
Synopsis:
$ jobby [--version] [--help] \
[--scheduler {slurm | ...}] \
[--threads THREADS] [--tmp-dir TMP_DIR] \
<JOB_ID [JOB_ID ...]>
Description:
jobby will take your past jobs and display their job information
in a standardized format. Why???! We have pipelines running on several
different clusters (using different job schedulers). jobby centralizes
and abstracts the process of querying different job schedulers within
a unified command-line interface.
For each supported scheduler, jobby will determine the best method
on a given target system for getting job information to return to the
user in a common output format.
Required Positional Arguments:
<JOB_ID [JOB_ID ...]>
Identifiers of past jobs. One or more JOB_IDs
can be provided. Multiple JOB_IDs should be
separated by a space. Information for each
of the JOB_IDs will be displayed to standard
output. Please see example section below for
more information.
Options:
-s,--scheduler {slurm | ...}
@Default: slurm
Job scheduler. Defines the job scheduler
of the target system. Additional support
for more schedulers coming soon!
@Example: --scheduler slurm
-n, --threads THREADS
@Default: 1
Number of threads to query the scheduler
in parallel.
@Example: --threads 8
-t, --tmp-dir TMP_DIR
@Default: /tmp/
Temporary directory. Path on the filesystem
for writing temporary output files. Ideally,
this path should point to a dedicated space
on the filesystem for writing tmp files. If
you need to inject a variable into this path
that should NOT be expanded, please quote the
options value in single quotes. The default
location of this option is set to the system
default via the $TMPDIR environment variable.
@Example: --tmp-dir '/scratch/$USER/'
-h, --help Shows help and usage information and exits.
@Example: --help
-v, --version Displays version information and exits.
@Example: --version
Example:
# Please avoid running jobby
# on a cluster's head node!
./jobby -s slurm -n 4 18627542 13627516 58627597 48627666
Version:
v0.2.0
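For slurm, the "common output format" described above amounts to normalizing scheduler query results into uniform records. A sketch of that normalization step, parsing pipe-delimited output in the style of `sacct --parsable2` — the sample text is fabricated for illustration and the parsing helper is not jobby's actual code:

```python
# Fabricated sample resembling `sacct --parsable2` output for two job IDs.
SAMPLE = """JobID|JobName|State|Elapsed
18627542|rna_align|COMPLETED|01:23:45
13627516|rna_quant|FAILED|00:02:10"""

def parse_sacct(text):
    """Turn pipe-delimited scheduler output into a list of dicts keyed by header."""
    header, *rows = [line.split("|") for line in text.strip().splitlines()]
    return [dict(zip(header, row)) for row in rows]

jobs = parse_sacct(SAMPLE)
```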
jobinfo
Get HPC usage metadata for a list of slurm jobids on biowulf
About:
This wrapper script works only on BIOWULF!
This script uses the "dashboard_cli" utility on biowulf to get HPC usage metadata
for a list of slurm jobids. These jobids can either be provided on the command
line or extracted from a snakemake.log file. Using the snakemake.log option
together with the --failonly option lists the paths to the STDERR files for
failed jobs, which can be very useful for debugging failed Snakemake workflows.
USAGE:
$ jobinfo -h
Example:
$ jobinfo -j 123456,7891011
$ jobinfo -s /path/to/snakemake.log
$ jobinfo -j 123456,7891011 -o /path/to/report.tsv
$ jobinfo -s /path/to/snakemake.log --failonly
usage: jobinfo [-h] [-v] [-j JOBLIST] [-s SNAKEMAKELOG] [-o OUTPUT] [-f]
Get slurm job information using slurm job id or snakemake.log file
options:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-j JOBLIST, --joblist JOBLIST
comma separated list of jobids. Cannot be used
together with -s option.
-s SNAKEMAKELOG, --snakemakelog SNAKEMAKELOG
snakemake.log file. Slurm jobids are extracted from
here. Cannot be used together with -j option.
-o OUTPUT, --output OUTPUT
Path to output file. All jobs (all states) and all
columns are reported in output file.
-f, --failonly output FAILED jobs only (onscreen), with paths to
the STDERR files for the failed jobs. All jobs are
reported with the -o option.
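Extracting slurm jobids from a snakemake.log can be sketched with a regex over the submission lines Snakemake writes. The sample log text below is illustrative, and the exact wording of those lines varies across Snakemake versions, so the pattern is an assumption rather than jobinfo's actual parser:

```python
import re

# Fabricated snakemake.log fragment; real logs contain many other lines.
SAMPLE_LOG = """Submitted job 3 with external jobid '123456'.
Submitted job 7 with external jobid '7891011'."""

def extract_jobids(log_text):
    """Collect the external (slurm) jobids from submission lines."""
    return re.findall(r"external jobid '(\d+)'", log_text)

ids = extract_jobids(SAMPLE_LOG)
```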
peek
USAGE: peek <file.tsv> [buffer]
Assumptions:
Input file is tab delimited
└── Globbing supported: *.txt
Optional:
buffer = 40 (default)
└── Changing buffer will increase/decrease output justification
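The buffer-controlled justification described above can be sketched as padding each tab-delimited column to its widest value, capped at `buffer` characters (40 by default, per the usage). The function below is an illustrative approximation of that behavior, not peek's actual implementation:

```python
def peek(lines, buffer=40):
    """Align TSV columns by padding each to its widest value, capped at buffer."""
    rows = [line.rstrip("\n").split("\t") for line in lines]
    widths = [min(buffer, max(len(row[i]) for row in rows))
              for i in range(len(rows[0]))]
    return ["  ".join(val[:buffer].ljust(w) for val, w in zip(row, widths))
            for row in rows]

out = peek(["gene\tcount", "BRCA1\t12"])
```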