Welcome to ArcticAI’s documentation!

CLI

Contains functions for creating command line interfaces.

class arctic_ai.cli.Commands[source]

A class for defining command line interface methods using Google Fire.


Methods

cnn_predict([basename, analysis_type, gpu_id])

Generates embeddings from preprocessed images using a trained CNN model.

gnn_predict([basename, analysis_type, ...])

Run GNN prediction on patches.

graph_creation([basename, analysis_type, ...])

Generates a graph based on embeddings of image patches.

im2dzi([in_file, out_dir, compression])

Converts an input image file to Deep Zoom Image format.

ink_detect([basename, compression])

Detect inks in the specified image.

nuclei_predict([predictor_dir, ...])

Runs a trained nuclei detection model on a stack of patches and saves the predicted masks and labels.

preprocess([basename, threshold, ...])

Preprocesses an image and generates patches for training or testing.

run_parallel([patient, input_dir, ...])

Runs the entire image analysis workflow on a given patient in parallel, using either a local executor or SLURM (support for more executors is planned).

run_series([patient, input_dir, ...])

Runs the entire image analysis workflow on a given patient.

tif2npy([in_file, out_dir])

Converts a .tif file to a .npy file.

dump_results

dzi_folder_setup

extract_dzis

quality_score

run_series_old

write_dzis
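
The methods above are exposed through Google Fire, so each one can be called either from Python or as a CLI subcommand. A minimal usage sketch, assuming the package is importable and that preprocessed inputs for the default sample "163_A1a" already exist on disk (the shell form in the comment mirrors how Fire maps keyword arguments to flags):

from arctic_ai.cli import Commands

# Equivalent shell form via Google Fire (entry point name may differ):
#   preprocess --basename=163_A1a --threshold=0.05 --patch_size=256
cmd = Commands()
cmd.preprocess(basename="163_A1a", threshold=0.05, patch_size=256)
cmd.cnn_predict(basename="163_A1a", analysis_type="tumor", gpu_id=-1)  # gpu_id=-1 runs on CPU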

cnn_predict(basename: str = '163_A1a', analysis_type: str = 'tumor', gpu_id: int = -1) None[source]

Generates embeddings from preprocessed images using a trained CNN model.

Parameters:
basename : str, optional

The base filename of the preprocessed image, default is "163_A1a".

analysis_type : str, optional

The type of analysis to be performed, default is "tumor".

gpu_id : int, optional

The ID of the GPU to be used for prediction, default is -1 (CPU).

gnn_predict(basename='163_A1a', analysis_type='tumor', radius=256, min_component_size=600, gpu_id=-1, generate_graph=True, no_component_break=True)[source]

Run GNN prediction on patches.

Parameters:
basename : str, optional

The base filename of the image to be segmented. Default is "163_A1a".

analysis_type : str, optional

The type of analysis to perform. Default is "tumor".

radius : int, optional

The radius used to generate the graphs. Default is 256.

min_component_size : int, optional

The minimum size of the connected components in the graph. Default is 600.

gpu_id : int, optional

The ID of the GPU to use for prediction. Default is -1, which uses the CPU.

generate_graph : bool, optional

Whether to generate the graph data from the preprocessed image. Default is True.

no_component_break : bool, optional

Whether to avoid breaking connected components across patches. Default is True.

Returns:
None
graph_creation(basename: str = '163_A1a', analysis_type: str = 'tumor', radius: int = 256, min_component_size: int = 600, no_component_break: bool = True) None[source]

Generates a graph based on embeddings of image patches.

Parameters:
basename : str, optional

The base filename of the image to be used for generating the graph. Default is "163_A1a".

analysis_type : str, optional

The type of analysis to be performed on the image. Default is "tumor".

radius : int, optional

The radius used for generating the graph. Default is 256.

min_component_size : int, optional

The minimum size for a connected component in the graph. Default is 600.

no_component_break : bool, optional

If True, the function will not break up large connected components. Default is True.

Returns:
None
im2dzi(in_file='', out_dir='./', compression: float = 1.0)[source]

Converts an input image file to Deep Zoom Image format.

Parameters:
in_file : str

The path to the input image file.

out_dir : str

The output directory where the DZI file will be written. Default is "./".

compression : float

The degree of compression applied to the DZI file. Default is 1.0.

Returns:
None
ink_detect(basename='163_A1a', compression=8)[source]

Detect inks in the specified image.

Parameters:
basename : str, optional

The base name of the image to detect inks in, by default "163_A1a".

compression : float, optional

The compression factor to use when detecting inks, by default 8.

Returns:
None

The function saves the detected inks to a pickle file, but does not return anything.

nuclei_predict(predictor_dir='./', predictor_file='', patch_file='', threshold=0.05, savenpy='pred.npy') None[source]

Loads a nuclei detection model from predictor_dir/predictor_file, runs it on the stack of patches in patch_file, and saves the output predictions as an npy stack and label dictionary if savenpy is not None. If savexml is not None, predictions are also saved to an ASAP xml. Internally, the function loads the model, flips the patches, runs the model on the patches, and saves the predictions in the requested formats.

Parameters:
  • predictor_dir (str) – path to the model folder

  • predictor_file (str) – filename of the model in the folder (don't include the path to the folder)

  • patch_file (str) – path to an npy stack of patches

  • classifier_type (BasePredictor) – class (from predict.py)

  • panoptic (bool) – whether the model performs panoptic segmentation. If False, it is assumed to do instance segmentation.

  • n (int) – number of classes the model classifies into

  • threshold (float) – threshold to use for the model

  • savenpy (str) – Path to the file to which to save npy output. The output is a numpy stack of the masks for each patch. In each mask, nuclei are given a non-zero integer, the ID of the instance they belong to. A pickled dictionary mapping the instance ID to the class label is also output. If savenpy=None, predictions are not saved in npy format.

  • savexml (str) – Path to the file to which to save xml output (ASAP format). If savexml=None, predictions are not saved in xml format.

  • patch_coords (str) – an extra pkl file which specifies the x, y metadata for the patches (x is row, y is col). Must be provided if exporting to xml, since location is part of the ASAP format.

preprocess(basename: str = '163_A1a', threshold: float = 0.05, patch_size: int = 256, ext: str = '.npy', secondary_patch_size: int = 0, df_section_pieces_file: str = '', image_mask_compression: float = 8.0, dirname: str = '.') None[source]

Preprocesses an image and generates patches for training or testing.

Parameters:
  • basename – The base filename of the image to be preprocessed.

  • threshold – The maximum fraction of a patch that can be blank. Default is 0.05.

  • patch_size – The size of the patches to be generated. Default is 256.

  • ext – The file extension of the input image. Default is “.npy”.

  • secondary_patch_size – The size of the patches for secondary processing. Default is 0.

  • df_section_pieces_file – The filename of the file containing metadata about image patches. Default is “section_pieces.pkl”.

  • image_mask_compression – The degree of compression applied to the image mask. Default is 8.

  • dirname – The directory where input and output files are stored. Default is “.”.

Returns:

None

run_parallel(patient='163_A1', input_dir='inputs', compression=1.0, overwrite=True, record_time=False, ext='.npy', dirname='.', df_section_pieces_file='df_section_pieces.pkl', run_stitch_slide=True)[source]

Runs the entire image analysis workflow on a given patient in parallel, using either a local executor or SLURM (support for more executors is planned).

Parameters:
patient : str, optional

The patient ID to be processed. Default is "163_A1".

input_dir : str, optional

The directory where the input files are stored. Default is "inputs".

compression : float, optional

The degree of compression applied to the input image. Default is 1.0.

overwrite : bool, optional

If True, overwrite existing output files. Default is True.

record_time : bool, optional

If True, record the time taken for each step of the workflow. Default is False.

ext : str, optional

The file extension of the input image. Default is ".npy".

dirname : str, optional

The directory where input and output files are stored. Default is ".".

df_section_pieces_file : str, optional

The filename of the file containing metadata about image patches. Default is "df_section_pieces.pkl".

This method runs the entire image analysis workflow on a given patient, with options to customize the input and output directories, the degree of image compression, the file extension, and whether to overwrite existing output files. If record_time is True, the time taken for each step of the workflow is recorded.
run_series(patient='163_A1', input_dir='inputs', compression=1.0, overwrite=True, record_time=False, ext='.npy', dirname='.', df_section_pieces_file='df_section_pieces.pkl', run_stitch_slide=True)[source]

Runs the entire image analysis workflow on a given patient.

Parameters:
patient : str, optional

The patient ID to be processed. Default is "163_A1".

input_dir : str, optional

The directory where the input files are stored. Default is "inputs".

compression : float, optional

The degree of compression applied to the input image. Default is 1.0.

overwrite : bool, optional

If True, overwrite existing output files. Default is True.

record_time : bool, optional

If True, record the time taken for each step of the workflow. Default is False.

ext : str, optional

The file extension of the input image. Default is ".npy".

dirname : str, optional

The directory where input and output files are stored. Default is ".".

df_section_pieces_file : str, optional

The filename of the file containing metadata about image patches. Default is "df_section_pieces.pkl".

This method runs the entire image analysis workflow on a given patient, with options to customize the input and output directories, the degree of image compression, the file extension, and whether to overwrite existing output files. If record_time is True, the time taken for each step of the workflow is recorded.
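
For reference, the individual CLI methods documented above can be chained by hand to approximate what run_series automates. This is an illustrative sketch only: the section basenames are hypothetical, and the actual ordering and extra steps inside run_series (e.g., DZI export and quality scoring) may differ.

from arctic_ai.cli import Commands

cmd = Commands()
for basename in ["163_A1a", "163_A1b"]:          # hypothetical tissue sections for patient 163_A1
    cmd.preprocess(basename=basename, ext=".npy")
    for analysis_type in ["tumor", "macro"]:
        cmd.cnn_predict(basename=basename, analysis_type=analysis_type, gpu_id=-1)
        cmd.graph_creation(basename=basename, analysis_type=analysis_type)
        cmd.gnn_predict(basename=basename, analysis_type=analysis_type,
                        gpu_id=-1, generate_graph=False)   # graph already built above
    cmd.ink_detect(basename=basename)
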
tif2npy(in_file='', out_dir='./')[source]

Converts a .tif file to a .npy file.

Parameters:
in_file : str

The path to the input .tif file.

out_dir : str

The path to the output directory where the .npy file will be saved.
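
A short conversion sketch combining tif2npy and im2dzi; the file paths are placeholders:

from arctic_ai.cli import Commands

cmd = Commands()
cmd.tif2npy(in_file="inputs/163_A1a.tif", out_dir="./npy/")                 # placeholder paths
cmd.im2dzi(in_file="inputs/163_A1a.tif", out_dir="./dzi/", compression=1.0)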

Preprocess

Contains functions for preprocessing images.

arctic_ai.preprocessing.preprocess(basename='163_A1a', threshold=0.05, patch_size=256, ext='.npy', secondary_patch_size=0, alpha=0.0009765625, no_break=False, df_section_pieces_file='', dirname='.', image_mask_compression=1.0, use_section=False)[source]

Preprocesses an image and generates patches for training or testing.

Parameters:
basename: str

The base filename of the image to be preprocessed.

threshold: float, optional

The maximum fraction of a patch that can be blank. Default is 0.05.

patch_size: int, optional

The size of the patches to be generated. Default is 256.

ext: str, optional

The file extension of the input image. Default is “.npy”.

no_break: bool, optional

If True, the function will not break large images into multiple smaller ones. Default is False.

df_section_pieces_file: str, optional

The filename of the file containing metadata about image patches. Default is “section_pieces.pkl”.

image_mask_compression: float, optional

The degree of compression applied to the image mask. Default is 8.

dirname: str, optional

The directory where input and output files are stored. Default is “.”.
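
The threshold parameter bounds how much of a patch may be blank before it is discarded. A generic numpy sketch of that idea (this is not the package's actual implementation; the whiteness test and cutoff are assumptions made only to illustrate the parameter):

import numpy as np

def keep_patch(patch: np.ndarray, threshold: float = 0.05, white: int = 230) -> bool:
    # Treat bright pixels as background and keep the patch only if at most
    # `threshold` of its pixels look blank.
    blank_fraction = (patch.mean(axis=-1) > white).mean()
    return blank_fraction <= threshold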

Graph Data

Functions for graph dataset generation.

arctic_ai.generate_graph.create_graph_data(basename='163_A1a', analysis_type='tumor', radius=256, min_component_size=600, no_component_break=False, dirname='.')[source]

Creates graph data for use in the GNN model for a given tissue slide.

Parameters:
basename : str

The basename of the tissue slide to create graph data for.

analysis_type : str

The type of analysis to perform. Can be "tumor" or "macro".

radius : int

The radius to use when creating the graph.

min_component_size : int

The minimum size a connected component must be to be included in the graph data.

no_component_break : bool

Whether to include all connected components in the graph data, or just the largest one.

dirname : str

The directory to save the graph data in.

Returns:
None

Returns:
None
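
The radius and min_component_size parameters appear to describe a spatial graph over patch positions: patches within radius of one another are connected, and small connected components are dropped. An illustrative sketch of that construction with scikit-learn and scipy (not the package's implementation; the coordinates and the unit of min_component_size are assumptions):

import numpy as np
from sklearn.neighbors import radius_neighbors_graph
from scipy.sparse.csgraph import connected_components

coords = np.random.rand(1000, 2) * 4096                      # hypothetical patch (x, y) positions
adj = radius_neighbors_graph(coords, radius=256, mode="connectivity")
n_components, labels = connected_components(adj, directed=False)
sizes = np.bincount(labels)
keep = sizes[labels] >= 600                                  # drop components below min_component_size
coords, labels = coords[keep], labels[keep]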

CNN

Contains functions related to generating embeddings for image patches using a convolutional neural network

class arctic_ai.cnn_prediction.CustomDataset(ID, patch_info, X, transform)[source]

Methods

embed

class arctic_ai.cnn_prediction.CustomDatasetOld(patch_info, npy_file, transform)[source]

Methods

embed

arctic_ai.cnn_prediction.generate_embeddings(basename='163_A1a', analysis_type='tumor', gpu_id=0, dirname='.')[source]

Generate embeddings for patches in a WSI.

Parameters:
basename : str

Basename of the WSI.

analysis_type : str

Type of analysis to perform. Can be either "tumor" or "macro".

gpu_id : int, optional

GPU to use for inference. If not provided, uses the CPU.

dirname : str, optional

Directory containing data for the WSI.

Returns:
None

The function saves the generated embeddings to the cnn_embeddings directory.
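
A generic sketch of the embedding step, batching patches through a CNN backbone in eval mode. This is illustrative only: the actual backbone, transforms, and output location are defined inside the package, and the ResNet-50 shown here is an assumption.

import torch
import torchvision

patches = torch.randn(32, 3, 256, 256)              # hypothetical batch of RGB patches
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()                   # expose pooled features as embeddings
backbone.eval()
with torch.no_grad():
    embeddings = backbone(patches)                  # shape: (32, 2048)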

GNN

Graph neural network inference for tumor and completeness assessment.

class arctic_ai.gnn_prediction.GCNFeatures(gcn, bayes=False, p=0.05, p2=0.1)[source]

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, edge_index[, edge_attr])

Defines the computation performed at every call.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(x, edge_index, edge_attr=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class arctic_ai.gnn_prediction.GCNNet(inp_dim, out_dim, hidden_topology=[32, 64, 128, 128], p=0.5, p2=0.1, drop_each=True)[source]

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, edge_index[, edge_attr])

Defines the computation performed at every call.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(x, edge_index, edge_attr=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
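
A toy forward pass through GCNNet wrapped in GCNFeatures, calling the module instance (not .forward) so registered hooks run. The node count, edges, and the 2048-dimensional input standing in for the CNN embedding size are assumptions; this is a sketch of the documented forward(x, edge_index, edge_attr=None) interface, not a full inference pipeline.

import torch
from arctic_ai.gnn_prediction import GCNNet, GCNFeatures

x = torch.randn(6, 2048)                                     # 6 patch nodes with 2048-d embeddings
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 0, 3, 2, 5, 4]], dtype=torch.long)

model = GCNFeatures(GCNNet(inp_dim=2048, out_dim=2))         # hidden_topology defaults to [32, 64, 128, 128]
model.eval()
with torch.no_grad():
    out = model(x, edge_index)                               # preferred over model.forward(x, edge_index)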

arctic_ai.gnn_prediction.predict(basename='163_A1a', analysis_type='tumor', gpu_id=0, dirname='.')[source]

Run GNN prediction on patches.

Parameters:
basename : str

Base name of the slide.

analysis_type : str

Type of analysis to run. Must be "tumor" or "macro".

gpu_id : int, optional

ID of the GPU to use. Default is 0.

dirname : str, optional

Directory to save results to. Default is current directory.

Returns:
None

Image Writing

Contains functions for stitching images together.

class arctic_ai.image_stitch.Numpy2DZI(tile_size=254, tile_overlap=1, tile_format='jpg', image_quality=0.8, resize_filter=None, copy_metadata=False, compression=1.0)[source]

Methods

create(source_arr, destination)

Create a deep zoom image from the given source array.

get_image(level)

Returns the bitmap image at the given level.

tiles(level)

Iterator for all tiles in the given level.

create(source_arr, destination)[source]

Create a deep zoom image from the given source array.

Parameters:
source_arr : ndarray

The source image as a NumPy array.

destination : str

The destination folder to save the tiles to.

Returns:
str

The destination folder where the tiles were saved.
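
A short usage sketch for Numpy2DZI; the source array and destination path are placeholders:

import numpy as np
from arctic_ai.image_stitch import Numpy2DZI

arr = np.zeros((4096, 4096, 3), dtype=np.uint8)                  # placeholder slide image
creator = Numpy2DZI(tile_size=254, tile_overlap=1, compression=1.0)
dzi_path = creator.create(arr, "stitched/163_A1a.dzi")           # returns the destination written to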

3D Model

Contains functions for creating 3D models of gross specimen. Code is being refactored into another repository, to be released.

Tumor Mapping

Contains functions for creating tumor maps at surgical site. Will contain functions for optimal transport mapping. Code is currently being refactored and this section will be updated.

Serial Workflow

Contains functions for serial processing of tissue sections. Contains functions for defining and running workflows.

arctic_ai.workflow.run_workflow_series(basename, compression, overwrite, ext, dirname, df_section_pieces_file, run_stitch_slide)[source]

Runs image processing workflow in series on an input image.

Parameters:
basename : str

The base name of the slide to process.

compression : float

The level of compression to apply to the slide.

overwrite : bool

Whether to overwrite existing files if they exist.

ext : str

The file extension of the slide.

dirname : str

The directory containing the slide and other relevant files.

df_section_pieces_file : str

The file containing information about the patches.

run_stitch_slide : bool

Whether to run the stitch_slides function after all other processing is complete.

Returns:
times : dict

A dictionary containing the times at which each step of the workflow was completed.
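
Because run_workflow_series returns its per-step timing dictionary, it can be used directly for simple benchmarking. A sketch with placeholder arguments (all parameters are required by the signature above):

from arctic_ai.workflow import run_workflow_series

times = run_workflow_series(basename="163_A1a", compression=1.0, overwrite=True,
                            ext=".npy", dirname=".",
                            df_section_pieces_file="df_section_pieces.pkl",
                            run_stitch_slide=True)
for step, finished_at in times.items():
    print(step, finished_at)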

Parallel Workflow

Contains functions for parallel processing of tissue sections.

arctic_ai.scale_workflow.run_parallel(patient='', input_dir='inputs', scheme='2/1', compression=6.0, overwrite=True, record_time=False, extract_dzi=False, ext='.tif', job_dir='./', restart=False, logfile='', loglevel='', cuda_visible_devices='$CUDA_VISIBLE_DEVICES', singularity_img_path='arcticai.img', run_slurm=False, cores=2, memory='60G', disk='3G', cuda_device='$(($RANDOM % 4))', prepend_path='$(realpath ~)/.local/bin/', slurm_gpus=1, slurm_account='qdp-alpha', slurm_partition='v100_12', time=1, gpu_cmode='exclusive')[source]

Runs the image processing workflow in parallel across multiple tissue sections.
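
A hedged example of submitting the parallel workflow to SLURM using the documented keyword arguments; the account, partition, and resource values are placeholders that must match a specific cluster:

from arctic_ai.scale_workflow import run_parallel

run_parallel(patient="163_A1", input_dir="inputs", ext=".npy",
             run_slurm=True,                      # submit through SLURM instead of the local executor
             slurm_account="my-account",          # placeholder
             slurm_partition="gpu",               # placeholder
             slurm_gpus=1, cores=2, memory="60G", time=1)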

Follicle Detection

Contains functions for detecting follicles in images.

arctic_ai.detection_workflows.follicle_detection.predict_hair_follicles(tumor_thres=0.3, patch_size=256, basename='340_A1a_ASAP', alpha_scale=2.0, alpha=[1.0, 2.0, 3.0], model_path='model_final.pth', model_dir='./output', detectron_threshold=0.55, dirname='../../bcc_test_set/', ext='.tif', batch_size=16, num_workers=1)[source]

Predict hair follicles in a given tumor image using a pre-trained GNN model.

Parameters:
  • tumor_thres (float) – Threshold value for the tumor prediction probability (default 0.3).

  • patch_size (int) – Size of the image patches used for prediction (default 256).

  • basename (str) – Name of the input image file without the extension (default "340_A1a_ASAP").

  • alpha_scale (float) – Scaling factor for the alpha values used in the GNN; reduces the tumor prediction probability (default 2.0).

  • alpha (list) – List of 3 alpha values used in the GNN to reduce tumor probability in the presence of follicles (default [1., 2., 3.]).

  • model_path (str) – Path to the pre-trained GNN model file (default "model_final.pth").

  • model_dir (str) – Directory path to save the GNN model (default "./output").

  • detectron_threshold (float) – Threshold value for object detection (default 0.55).

  • dirname (str) – Directory path to the input image file (default "../../bcc_test_set/").

  • ext (str) – Extension of the input image file (default ".tif").

  • batch_size (int) – Batch size used for prediction (default 16).

  • num_workers (int) – Number of worker processes for data loading (default 1).

Returns:
None
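
A minimal call sketch using the documented defaults; the model and data paths are placeholders that must point at an existing trained model and test image:

from arctic_ai.detection_workflows.follicle_detection import predict_hair_follicles

predict_hair_follicles(basename="340_A1a_ASAP",
                       model_path="model_final.pth", model_dir="./output",   # placeholders
                       dirname="../../bcc_test_set/", ext=".tif",
                       detectron_threshold=0.55, batch_size=16)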

Nuclei Detection

Example command:

python cli.py detect_from_patches --predictor_dir=. --predictor_file=./ArcticAI_Detection/output_a=0_128_more_gr_pretrained_d2s/model_final.pth --patch_file=./ArcticAI_Detection/ex4/spline/patches/131_B1e.npy --threshold=0.05

arctic_ai.detection_workflows.nuclei_cli.detect_from_patches(predictor_dir='./', predictor_file='', patch_file='', classifier_type=<class 'arctic_ai.detection_workflows.predict_detection.PanopticNucleiPredictor'>, threshold=0.05, savenpy='pred.npy', savexml=None, patch_coords=None)[source]
Parameters:
  • predictor_dir (str) – path to model folder

  • predictor_file (str) – filename of model in the folder (don’t include path to folder)

  • patch_file (str) – path to an npy stack of patches

  • classifier_type (BasePredictor) – class (from predict.py)

  • panoptic (bool) – whether the model performs panoptic segmentation. If false, it is assumed to do instance segmentation.

  • n (int) – number of classes the model classifies into

  • threshold (float) – threshold to use for model

  • savenpy (str) – Path to the file to which to save npy output. The output is a numpy stack of the masks for each patch. In each mask, nuclei are given a non-zero integer, the ID of the instance they belong to. A pickled dictionary mapping the instance ID to the class label is also output. If savenpy=None, predictions are not saved in npy format.

  • savexml (str) – Path to file to which to save xml output (ASAP format). If savexml=None, predictions are not saved in an xml format.

  • patch_coords (str) – an extra pkl file which specifies the x,y (x is row, y is col) metadata for the patches. Must be provided if exporting to xml, since location is a part of the ASAP format
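
A Python call sketch showing XML export; as noted in the parameter list above, patch_coords must be supplied whenever savexml is set. All paths here are placeholders:

from arctic_ai.detection_workflows.nuclei_cli import detect_from_patches

detect_from_patches(predictor_dir=".", predictor_file="model_final.pth",
                    patch_file="patches/131_B1e.npy",
                    threshold=0.05,
                    savenpy="pred.npy",
                    savexml="pred.xml",                        # ASAP-format output
                    patch_coords="patches/131_B1e_coords.pkl") # required for xml export (placeholder)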

arctic_ai.detection_workflows.nuclei_cli.extract_nuclei_patch(image, mask, coord, size)[source]

Get a subsection of the image with nuclei at the center.

Parameters:
  • image – the whole slide image

  • mask – nuclei mask

  • coord – position in the form (row, column) of the top left corner of the mask in the image

  • size – size of the patch to extract (size x size)

arctic_ai.detection_workflows.nuclei_cli.get_diverse_patches(slide, size=128, thresh=0.5)[source]

Get benign, inflammatory, and BCC patches for a slide.

arctic_ai.detection_workflows.nuclei_cli.prettify(elem)[source]

Return a pretty-printed XML string for the Element.

arctic_ai.detection_workflows.nuclei_cli.run(predictor, patches, threshold=0.2)[source]

Returns a list of tuples. Tuples are of the form (masks for patch, associated labels for patch).

Parameters:
  • predictor – model

  • patches – must be BGR

  • duplicate – if set to False, removes duplication

arctic_ai.detection_workflows.nuclei_cli.run_for_patch_with_cnn(predictor, cnn, image, coord, size, duplicate=False)[source]
Parameters:
  • predictor – model

  • cnn – class predictor

  • image – whole slide image

  • coord – (row, col) of the top left corner of the patch

  • size – size of patches to pass through predictor

  • duplicate – if set to False, removes duplication

Nuclei Detection

Functions to support cell localization.

Nuclei GNN

Functions to support cell-graph neural network inference. Cell level CNN embedding code is being refactored and will be added.

class arctic_ai.detection_workflows.nuclei_gnn.GCNFeatures(gcn, bayes=False)[source]

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, edge_index[, edge_attr])

Defines the computation performed at every call.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(x, edge_index, edge_attr=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class arctic_ai.detection_workflows.nuclei_gnn.GCNNet(inp_dim, out_dim, hidden_topology=[32, 64, 128, 128], p=0.5, p2=0.1, drop_each=True)[source]

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(x, edge_index[, edge_attr])

Defines the computation performed at every call.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(x, edge_index, edge_attr=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Nuclei Detection Utils

Additional functions to support cell detection.

