
Wednesday, May 14, 2025

Automating Feature Class Count and Metadata Export with Python and ArcPy

 


If you are working with large geospatial datasets in Esri’s ArcGIS, keeping track of the features within each geodatabase is essential. For example, you may need a quick overview of feature class types, their counts, and the shape types across all geodatabases in a directory. In this blog post, I’ll show you how to automate the process of gathering feature class metadata and exporting it to a CSV file using Python and ArcPy.

The solution I’ll demonstrate scans all geodatabases in a specified directory, collects metadata (such as the feature class name, dataset, shape type, and feature count), and exports it to a CSV file. This approach helps you quickly summarize key attributes of all feature classes in your project.

Use Case

You might need this kind of automation when:

  • You want to document all feature classes in your geodatabases.

  • You need to review the shape types and feature counts of all geospatial data in a project.

  • You’re preparing reports or verifying data integrity across large geodatabases.

The Python script below automates this process by walking through a directory of geodatabases and saving the results in a CSV file for later use.

Code:

import arcpy
import os
import csv

# Define the folder path containing the geodatabases
folder_path = r'PATH_TO_YOUR_GEODATABASES_FOLDER'  # Update with your geodatabase folder path

# Define the output CSV file path
csv_file = r'PATH_TO_YOUR_OUTPUT_CSV'  # Update with your desired CSV file path

# Define the CSV headers
csv_headers = ['GDB Name', 'Dataset', 'Featureclass Name', 'Shape Type', 'Count']

# Open the CSV file for writing with utf-8 encoding
with open(csv_file, mode='w', newline='', encoding='utf-8') as file:
    writer = csv.writer(file)
    writer.writerow(csv_headers)

    # Walk through the directory tree using os.walk()
    for root, dirs, files in os.walk(folder_path):
        for dir_name in dirs:
            if dir_name.endswith('.gdb'):  # Check if the directory is a geodatabase
                gdb_path = os.path.join(root, dir_name)

                # Set the workspace to the current GDB
                arcpy.env.workspace = gdb_path
                print(f"Processing {dir_name}")

                # List all standalone feature classes (those not in datasets)
                standalone_featureclasses = arcpy.ListFeatureClasses()
                for fc in standalone_featureclasses:
                    fc_path = os.path.join(arcpy.env.workspace, fc)
                    desc = arcpy.Describe(fc_path)

                    # Get the shape type and feature count
                    shape_type = desc.shapeType
                    count = arcpy.GetCount_management(fc_path)[0]

                    # Write to CSV (no dataset for standalone feature classes)
                    writer.writerow([dir_name, 'None', fc, shape_type, count])

                # List all feature datasets in the geodatabase
                datasets = arcpy.ListDatasets('', 'Feature')

                # If datasets exist, iterate through them
                if datasets:
                    for dataset in datasets:
                        # List feature classes in each dataset
                        dataset_featureclasses = arcpy.ListFeatureClasses(feature_dataset=dataset)
                        for fc in dataset_featureclasses:
                            fc_path = os.path.join(arcpy.env.workspace, dataset, fc)
                            desc = arcpy.Describe(fc_path)

                            # Get the shape type and feature count
                            shape_type = desc.shapeType
                            count = arcpy.GetCount_management(fc_path)[0]

                            # Write to CSV
                            writer.writerow([dir_name, dataset, fc, shape_type, count])

print(f"CSV created successfully at {csv_file}")

How the Script Works:

  1. Set Folder and CSV Paths:

    • You need to define the path where your geodatabases are stored (folder_path) and the location of the output CSV file (csv_file).

  2. CSV Headers:

    • The script writes a header row into the CSV that includes:

      • GDB Name: The name of the geodatabase.

      • Dataset: The name of the feature dataset (if applicable).

      • Featureclass Name: The name of the feature class.

      • Shape Type: The geometry type as reported by ArcPy (e.g., Point, Polyline, Polygon).

      • Count: The number of features in the feature class.
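The five-column schema above can be sketched with the standard csv module alone, no ArcPy required. The sample row values here (geodatabase, dataset, and feature class names) are hypothetical:

```python
import csv
import io

csv_headers = ['GDB Name', 'Dataset', 'Featureclass Name', 'Shape Type', 'Count']

# Build the CSV in memory; the real script writes to a file on disk.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(csv_headers)

# A hypothetical row for a polygon feature class inside a feature dataset:
writer.writerow(['Parcels.gdb', 'Cadastre', 'TaxParcels', 'Polygon', '1042'])

print(buffer.getvalue())
```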

  3. Directory Traversal:

    • The os.walk() function walks through the folder that contains the geodatabases. If a directory ends with .gdb, it processes that geodatabase.
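The .gdb detection itself needs nothing from ArcPy, since a file geodatabase is just a folder whose name ends in .gdb. A minimal sketch of the traversal, using a temporary directory with made-up folder names:

```python
import os
import tempfile

# Create a throwaway folder structure: one geodatabase, one ordinary folder.
root_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(root_dir, 'Parcels.gdb'))   # hypothetical GDB name
os.makedirs(os.path.join(root_dir, 'shapefiles'))    # not a geodatabase

gdb_paths = []
for root, dirs, files in os.walk(root_dir):
    for dir_name in dirs:
        if dir_name.endswith('.gdb'):  # same check the script uses
            gdb_paths.append(os.path.join(root, dir_name))

print([os.path.basename(p) for p in gdb_paths])
```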

  4. Standalone Feature Classes:

    • The script first lists standalone feature classes (those not in any dataset) within each geodatabase and gets their shape type and feature count.

  5. Feature Datasets:

    • If feature datasets exist in the geodatabase, the script processes each feature class within the dataset, retrieves the metadata, and writes the information to the CSV.

  6. Writing to CSV:

    • For each feature class, the script writes a new row in the CSV with the collected metadata, including the shape type and count.
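Two details of the writing step are easy to overlook: opening the file with newline='' (so the csv module controls line endings and rows are not double-spaced on Windows) and encoding='utf-8' (so non-ASCII feature class names survive). A small round-trip sketch with hypothetical metadata rows:

```python
import csv
import os
import tempfile

csv_path = os.path.join(tempfile.mkdtemp(), 'metadata.csv')

# Hypothetical rows, including non-ASCII names in the first two columns.
rows = [
    ['GDB Name', 'Dataset', 'Featureclass Name', 'Shape Type', 'Count'],
    ['Hydro.gdb', 'None', 'Flüsse', 'Polyline', '87'],
]

# Write with the same options the script uses.
with open(csv_path, mode='w', newline='', encoding='utf-8') as f:
    csv.writer(f).writerows(rows)

# Read the file back and confirm nothing was lost or mangled.
with open(csv_path, newline='', encoding='utf-8') as f:
    round_trip = list(csv.reader(f))

print(round_trip == rows)
```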

Benefits of Using This Script:

  • Automation: It automatically collects and exports metadata for all feature classes in a geodatabase, eliminating the need for manual tracking.

  • Documentation: The script generates a well-structured CSV that can be used for documentation or reports.

  • Batch Processing: Whether you have a few geodatabases or hundreds, this script handles them all in a batch process, saving you valuable time.

  • Versatility: You can easily modify the script to capture other metadata or make it work with different data sources.

Conclusion:

Managing geospatial data across multiple geodatabases can become a daunting task without the right tools. By automating the process of collecting feature class metadata and exporting it into a CSV, you can gain deeper insights into your datasets with minimal effort. This script, leveraging ArcPy and Python, ensures that you can process large amounts of data efficiently and keep track of important details such as feature counts, shape types, and more.

Feel free to adjust the file paths and adapt the script to fit your specific project needs. This tool is perfect for data management, quality assurance, and reporting tasks!