text module
- class text.TextAnalyzer(csv_path: str, column_key: str = None, csv_encoding: str = 'utf-8')
Bases:
object
Used to read text from a CSV file and then run the TextDetector on it.
- read_csv() dict
Read the CSV file and return the dictionary with the text entries.
- Returns:
dict – The dictionary with the text entries.
- class text.TextDetector(subdict: dict, analyse_text: bool = False, skip_extraction: bool = False, accept_privacy: str = 'PRIVACY_AMMICO')
Bases:
AnalysisMethod
- analyse_image() dict
Perform text extraction from the image and analysis of the detected text.
- Returns:
dict – The updated dictionary with text analysis results.
- get_text_from_image()
Detect text on the image using Google Cloud Vision API.
- remove_linebreaks()
Remove linebreaks from original and translated text.
- set_keys() dict
Set the default keys for text analysis.
- Returns:
dict – The dictionary with default text keys.
- translate_text()
Translate the detected text to English using the Translator object.
- text.privacy_disclosure(accept_privacy: str = 'PRIVACY_AMMICO')
Asks the user to accept the privacy statement.
- Parameters:
accept_privacy (str) – The name of the disclosure variable (default: “PRIVACY_AMMICO”).
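A minimal usage sketch for the text module, assuming the modules above are importable from the installed ammico package (the `ammico.text` import path, the example file path, and the `text_english` result key are assumptions, not part of the documented API):

```python
import ammico.text as text

# Accept the privacy disclosure before calling the Google Cloud Vision API.
text.privacy_disclosure(accept_privacy="PRIVACY_AMMICO")

# `subdict` is one entry of the nested dictionary produced by utils.find_files.
subdict = {"filename": "data/image_1.png"}  # hypothetical example file

detector = text.TextDetector(subdict, analyse_text=True)
result = detector.analyse_image()   # extracts, translates and analyses the text
print(result.get("text_english"))   # key name is an assumption; inspect `result`
```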
summary module
multimodal search module
faces module
- class faces.EmotionDetector(subdict: dict, emotion_threshold: float = 50.0, race_threshold: float = 50.0, gender_threshold: float = 50.0, accept_disclosure: str = 'DISCLOSURE_AMMICO')
Bases:
AnalysisMethod
- analyse_image() dict
Performs facial expression analysis on the image.
- Returns:
dict – The updated subdict dictionary with analysis results.
- analyze_single_face(face: ndarray) dict
Analyzes the features of a single face in the image.
- Parameters:
face (np.ndarray) – The face image array.
- Returns:
dict – The analysis results for the face.
- clean_subdict(result: dict) dict
Cleans the subdict dictionary by converting results into appropriate formats.
- Parameters:
result (dict) – The analysis results.
- Returns:
dict – The updated subdict dictionary.
- facial_expression_analysis() dict
Performs facial expression analysis on the image.
- Returns:
dict – The updated subdict dictionary with analysis results.
- set_keys() dict
Sets the initial parameters for the analysis.
- Returns:
dict – The dictionary with initial parameter values.
- wears_mask(face: ndarray) bool
Determines whether a face wears a mask.
- Parameters:
face (np.ndarray) – The face image array.
- Returns:
bool – True if the face wears a mask, False otherwise.
- faces.deepface_symlink_processor(name)
- faces.ethical_disclosure(accept_disclosure: str = 'DISCLOSURE_AMMICO')
Asks the user to accept the ethical disclosure.
- Parameters:
accept_disclosure (str) – The name of the disclosure variable (default: “DISCLOSURE_AMMICO”).
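A minimal usage sketch for the faces module, assuming the `ammico.faces` import path (the example file path is a placeholder):

```python
import ammico.faces as faces

# Accept the ethical disclosure once per session before running the analysis.
faces.ethical_disclosure(accept_disclosure="DISCLOSURE_AMMICO")

subdict = {"filename": "data/portrait.jpg"}  # hypothetical example file
detector = faces.EmotionDetector(
    subdict,
    emotion_threshold=50.0,  # only report attributes above 50% confidence
    race_threshold=50.0,
    gender_threshold=50.0,
)
result = detector.analyse_image()
print(result)  # updated subdict with face, mask and emotion results
```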
color_analysis module
- class colors.ColorDetector(subdict: dict, delta_e_method: str = 'CIE 1976')
Bases:
AnalysisMethod
- analyse_image()
Uses the colorgram library to extract the n most common colors from the images. One problem is that the most common colors are extracted before being categorized, so for small values of n it can happen that the ten most common colors are all shades of grey while other colors are present in the image but ignored. For this reason, n_colors=100 was chosen as the default.
The colors are then matched to the closest color in the CSS3 color list using the delta-e metric and merged into one data frame. The colors can be reduced to a smaller list of colors using the get_color_table function. These colors are: “red”, “green”, “blue”, “yellow”, “cyan”, “orange”, “purple”, “pink”, “brown”, “grey”, “white”, “black”.
- Returns:
dict – Dictionary with color names as keys and percentage of color in image as values.
- rgb2name(c, merge_color: bool = True, delta_e_method: str = 'CIE 1976') str
Take an rgb color as input and return the closest color name from the CSS3 color list.
- Parameters:
c (Union[List,tuple]) – RGB value.
merge_color (bool, optional) – Whether the color name should be reduced to the smaller color list; defaults to True.
- Returns:
str – Closest matching color name.
- set_keys() dict
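A minimal usage sketch for the color analysis, assuming the `ammico.colors` import path (the example file path is a placeholder):

```python
import ammico.colors as colors

subdict = {"filename": "data/poster.png"}  # hypothetical example file
detector = colors.ColorDetector(subdict, delta_e_method="CIE 1976")
result = detector.analyse_image()
# `result` maps the merged color names (e.g. "red", "grey") to the
# percentage of the image covered by that color.
print(result)
```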
cropposts module
utils module
- class utils.AnalysisMethod(subdict: dict)
Bases:
object
Base class to be inherited by all analysis methods.
- analyse_image()
- set_keys()
- class utils.DownloadResource(**kwargs)
Bases:
object
A remote resource that needs on demand downloading.
We use this as a wrapper to the pooch library. The wrapper registers each data file and allows prefetching through the CLI entry point ammico_prefetch_models.
- get()
- resources = []
- utils.ammico_prefetch_models()
Prefetch all the download resources.
- utils.append_data_to_dict(mydict: dict) dict
Append entries from nested dictionaries to keys in a global dict.
- utils.dump_df(mydict: dict) DataFrame
Utility to dump the dictionary into a dataframe.
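A sketch of turning per-image results into a pandas DataFrame with the two utilities above, assuming the `ammico.utils` import path (the file ids and the `text_english` key are illustrative assumptions):

```python
import ammico.utils as utils

# Nested dictionary after running one or more detectors on each entry.
mydict = {
    "image_1": {"filename": "data/image_1.png", "text_english": "hello"},
    "image_2": {"filename": "data/image_2.png", "text_english": "world"},
}

flat = utils.append_data_to_dict(mydict)  # collect entries under global keys
df = utils.dump_df(flat)                  # pandas DataFrame, one row per image
df.to_csv("analysis_results.csv")
```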
- utils.find_files(path: str = None, pattern=['png', 'jpg', 'jpeg', 'gif', 'webp', 'avif', 'tiff'], recursive: bool = True, limit=20, random_seed: int = None) dict
Find image files on the file system.
- Parameters:
path (str, optional) – The base directory where we are looking for the images. Defaults to None, which uses the ammico data directory if set or the current working directory otherwise.
pattern (str|list, optional) – The naming pattern that the filename should match. Use either ‘.ext’ or just ‘ext’. Defaults to [“png”, “jpg”, “jpeg”, “gif”, “webp”, “avif”, “tiff”]. Can be used to allow other patterns or to only include specific prefixes or suffixes.
recursive (bool, optional) – Whether to recurse into subdirectories. Default is set to True.
limit (int/list, optional) – The maximum number of images to be found. Provide a list or tuple of length 2 to batch the images. Defaults to 20. To return all images, set to None or -1.
random_seed (int, optional) – The random seed to use for shuffling the images. If None is provided the data will not be shuffled. Defaults to None.
- Returns:
dict – A nested dictionary with file ids and all filenames including the path.
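A sketch of collecting images with find_files, assuming the `ammico.utils` import path (the directory is a placeholder):

```python
import ammico.utils as utils

mydict = utils.find_files(
    path="data/",                      # placeholder; None falls back to the ammico data dir or cwd
    pattern=["png", "jpg", "jpeg"],
    recursive=True,
    limit=20,
    random_seed=42,                    # shuffle reproducibly; omit to keep file-system order
)
# Nested dictionary: one entry per file id, each holding the file path.
print(list(mydict.keys()))
```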
- utils.get_color_table()
- utils.get_dataframe(mydict: dict) DataFrame
- utils.initialize_dict(filelist: list) dict
Initialize the nested dictionary for all the found images.
- Parameters:
filelist (list) – The list of files to be analyzed, including their paths.
- Returns:
dict – The nested dictionary with all image ids and their paths.
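A sketch of building the nested analysis dictionary from an explicit list of paths instead of scanning the file system (the paths are placeholders):

```python
import ammico.utils as utils

filelist = ["data/image_1.png", "data/image_2.png"]  # hypothetical paths
mydict = utils.initialize_dict(filelist)
# Each image id maps to a dictionary holding the corresponding path,
# ready to be passed entry by entry to the detector classes.
print(mydict)
```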
- utils.is_interactive()
Check if we are running in an interactive environment.
- utils.iterable(arg)
display module
- class display.AnalysisExplorer(mydict: dict)
Bases:
object
- run_server(port: int = 8050) None
Run the Dash server to start the analysis explorer.
- Parameters:
port (int, optional) – The port number to run the server on (default: 8050).
- update_picture(img_path: str)
Callback function to update the displayed image.
- Parameters:
img_path (str) – The path of the selected image.
- Returns:
Union[PIL.PngImagePlugin, None] – The image object to be displayed, or None if no image path is provided.
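A sketch of launching the interactive explorer, assuming the `ammico.display` and `ammico.utils` import paths (the directory and port choice are placeholders; 8050 is the documented default):

```python
import ammico.utils as utils
import ammico.display as display

mydict = utils.find_files(path="data/", limit=10)
explorer = display.AnalysisExplorer(mydict)
explorer.run_server(port=8050)  # starts the Dash app serving the explorer
```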