Caliper

Auxiliary Management

Configuration and Administration

Configure schema profiles, run data tooling workflows, and manage user access from one operational reference.

Management focus

  • Schema mapping standards

    Define canonical field mappings and transforms across data source variants.

  • Data tools operations

    Run dataset exports/imports and migration workflows from a single administrative surface.

  • Import pipeline diagnostics

    Inspect schema profiles, batches, jobs, and row-level validation output for parcel workflows.

  • User access controls

    Manage login paths, user roles, and permission groups for secure platform administration.

Auxiliary Management

Schema, data, and access management

Use this page for prerequisite schema setup, administrative data operations, and user account governance.

Parcel Schema Profiles

This section is a required prerequisite for Loading Property Account Information.

What Is a ParcelSchemaProfile?

A ParcelSchemaProfile is a named, reusable translation table that tells Pie how to read a vendor's CSV export. Because different county assessors and GIS platforms (such as Esri/ArcGIS) use their own column naming conventions, Pie does not assume a fixed column layout. Instead, every import requires a profile that explicitly maps each CSV column header to one of Pie's canonical internal fields.

Without a profile, Pie cannot interpret any parcel data - the import pipeline will refuse to proceed.


Why This Matters for Esri / ArcGIS Exports

Property assessment data exported from Esri/ArcGIS (or any GIS-based assessor platform) typically has column names that reflect the platform's own data model rather than any universal standard. For example, a situs address column might be called SiteAddress, SitusAddr, PropertyAddress, or something else entirely depending on how the jurisdiction has configured their system.

A ParcelSchemaProfile bridges this gap:

  • It records the exact column names present in the vendor export.
  • It specifies how each value should be parsed or normalized (for example, trimming whitespace, parsing a decimal currency field, converting a date from epoch milliseconds).
  • It marks which columns are required for a row to be considered valid.

Profiles are reusable across tax years - once configured for a given assessor extract layout, the same profile can be used for every annual import as long as the column structure does not change.


Creating a Profile

Step 1 - Open the admin page

Go to Django admin -> Parcel Column Import Manager.

This section is visible to superusers only.

Step 2 - Add a new profile

Click Add Parcel Schema Profile and fill in:

Field | Description
Name | A short, unique identifier for the profile (for example, esri-default). Used to select the profile during import.
Vendor | The data source platform. Currently the only choice is Esri / ArcGIS.
Description | Optional free-text note describing the data source or county.
Active | Must be checked for the profile to appear in import dropdowns.

Step 3 - Add column mappings

Under Column Mappings, add one row per CSV column you want to import. Each row has:

Field | Description
Source Field | The exact column header as it appears in the CSV. Case and spacing must match precisely.
Canonical Field | The Pie field this column maps to (choose from the dropdown - see the full list below).
Transform | How the raw string value should be parsed before storing (see transforms below).
Required | If checked, rows without this field will be marked invalid. Check this for account at minimum.
Friendly Label | Display-only label shown in the admin UI. Does not affect import logic.

Tip: After saving, use the Auto-fill Friendly Labels action to populate any blank friendly labels automatically from the canonical field name.

Step 4 - Save

Save the profile. It is immediately available for use in any import method (UI or CLI).


Importing and Exporting Profiles via JSON

Profiles can be exported to a JSON file and re-imported into another environment (for example, from local to production, or to back up a configuration). This avoids manual re-entry.

Export

  1. Go to Data Tools.
  2. Under the schema profile export section, select the profile and click Export.
  3. A .json file is downloaded in the pie.parcel-schema-profile format.

Import

  1. Go to Data Tools.
  2. Under the schema profile import section, upload the .json file.
  3. Pie validates the file and upserts the profile by name:
    • If the profile name does not exist, it is created.
    • If it already exists, its vendor/description/active fields are updated and all existing column mappings are replaced with the imported ones.

Warning: Importing a profile with an existing name will delete and replace all of its current column mappings. This is intentional - the import is designed to be idempotent.
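The upsert-by-name semantics can be illustrated with a minimal pure-Python sketch. This is not the actual Django implementation; `upsert_profile` and the in-memory `db` dict are hypothetical stand-ins for the real ORM logic:

```python
def upsert_profile(db, payload):
    """Sketch of upsert-by-name: update the profile's settings and
    wholesale-replace its column mappings, never merging old and new.
    Running the same import twice leaves identical state (idempotent)."""
    prof = payload["profile"]
    existing = db.setdefault("profiles", {}).get(prof["name"])
    db["profiles"][prof["name"]] = {
        "name": prof["name"],
        "vendor": prof["vendor"],
        "description": prof.get("description", ""),
        "active": prof.get("active", True),
        # Any pre-existing mappings are discarded, not merged.
        "field_maps": list(prof["field_maps"]),
    }
    return "updated" if existing else "created"
```

Re-importing the same file a second time reports "updated" and leaves the stored mappings identical, which is the idempotency the warning describes.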


The esri-default Profile (Reference)

The esri-default profile is the standard mapping for Esri/ArcGIS property assessment exports. Its column mappings are:

CSV Column (Source Field) | Canonical Field | Transform
Account | account | strip (required)
AgriculturalAssessedValue | ag_assessed_value | decimal
AgriculturalImprovementValue | ag_improvement_value | decimal
AgriculturalLandValue | ag_land_value | decimal
CommercialAssessedValue | com_assessed_value | decimal
CommercialImprovementValue | com_improvement_value | decimal
CommercialLandValue | com_land_value | decimal
MailingAddress | mailing_address | strip
Municipality | municipality | strip
Owner | owner_name | strip
Parcel_ID | parcel_id | strip
ParcelReport | parcel_report_url | strip
PrevOwner1 | prev_owner_1 | strip
ResidentialAssessedValue | res_assessed_value | decimal
ResidentialImprovementValue | res_improvement_value | decimal
ResidentialLandValue | res_land_value | decimal
SalePrice1 | sale_price_1 | decimal
SiteAddress | situs_address | strip
SitusZip | situs_zip | strip
TotalMarketValue | assessor_total_market_value | decimal

Note that assessor_total_assessed_value is not imported directly - it is auto-computed by the pipeline from the sum of residential, commercial, and agricultural assessed values if not explicitly provided.

The JSON representation of this profile (suitable for import) follows the pie.parcel-schema-profile format:

{
  "format": "pie.parcel-schema-profile",
  "format_version": 1,
  "profile": {
    "name": "esri-default",
    "vendor": "esri",
    "description": "Default ESRI mapping",
    "active": true,
    "field_maps": [
      { "source_field": "Account", "canonical_field": "account", "transform": "strip", "required": true, "friendly_label": "Account" },
      { "source_field": "AgriculturalAssessedValue", "canonical_field": "ag_assessed_value", "transform": "decimal", "required": false, "friendly_label": "Ag Assessed Value" },
      { "source_field": "AgriculturalImprovementValue", "canonical_field": "ag_improvement_value", "transform": "decimal", "required": false, "friendly_label": "Ag Improvement Value" },
      { "source_field": "AgriculturalLandValue", "canonical_field": "ag_land_value", "transform": "decimal", "required": false, "friendly_label": "Ag Land Value" },
      { "source_field": "CommercialAssessedValue", "canonical_field": "com_assessed_value", "transform": "decimal", "required": false, "friendly_label": "Com Assessed Value" },
      { "source_field": "CommercialImprovementValue", "canonical_field": "com_improvement_value", "transform": "decimal", "required": false, "friendly_label": "Com Improvement Value" },
      { "source_field": "CommercialLandValue", "canonical_field": "com_land_value", "transform": "decimal", "required": false, "friendly_label": "Com Land Value" },
      { "source_field": "MailingAddress", "canonical_field": "mailing_address", "transform": "strip", "required": false, "friendly_label": "Mailing Address" },
      { "source_field": "Municipality", "canonical_field": "municipality", "transform": "strip", "required": false, "friendly_label": "Municipality" },
      { "source_field": "Owner", "canonical_field": "owner_name", "transform": "strip", "required": false, "friendly_label": "Owner Name" },
      { "source_field": "Parcel_ID", "canonical_field": "parcel_id", "transform": "strip", "required": false, "friendly_label": "Parcel Id" },
      { "source_field": "ParcelReport", "canonical_field": "parcel_report_url", "transform": "strip", "required": false, "friendly_label": "Parcel Report Url" },
      { "source_field": "PrevOwner1", "canonical_field": "prev_owner_1", "transform": "strip", "required": false, "friendly_label": "Prev Owner 1" },
      { "source_field": "ResidentialAssessedValue", "canonical_field": "res_assessed_value", "transform": "decimal", "required": false, "friendly_label": "Res Assessed Value" },
      { "source_field": "ResidentialImprovementValue", "canonical_field": "res_improvement_value", "transform": "decimal", "required": false, "friendly_label": "Res Improvement Value" },
      { "source_field": "ResidentialLandValue", "canonical_field": "res_land_value", "transform": "decimal", "required": false, "friendly_label": "Res Land Value" },
      { "source_field": "SalePrice1", "canonical_field": "sale_price_1", "transform": "decimal", "required": false, "friendly_label": "Sale Price 1" },
      { "source_field": "SiteAddress", "canonical_field": "situs_address", "transform": "strip", "required": false, "friendly_label": "Situs Address" },
      { "source_field": "SitusZip", "canonical_field": "situs_zip", "transform": "strip", "required": false, "friendly_label": "Situs Zip" },
      { "source_field": "TotalMarketValue", "canonical_field": "assessor_total_market_value", "transform": "decimal", "required": false, "friendly_label": "Assessor Total Market Value" }
    ]
  },
  "compatibility": {
    "dropped_legacy_mappings": []
  }
}

Canonical Fields Reference

These are all the fields Pie can store on a Parcel record. Every column mapping must point to one of these.

Canonical Field | Description | Typical Transform
account | Account number - required | strip
parcel_id | Parcel / APN identifier | strip
owner_name | Property owner name | strip
situs_address | Property situs (physical) address | strip
mailing_address | Owner mailing address (full string) | strip
municipality | Municipality / taxing jurisdiction | strip
situs_zip | Situs ZIP code | strip
assessor_total_market_value | Total assessor market value | decimal
assessor_total_assessed_value | Total assessor assessed value (auto-computed if absent) | decimal
res_assessed_value | Residential assessed value | decimal
res_improvement_value | Residential improvement value | decimal
res_land_value | Residential land value | decimal
com_assessed_value | Commercial assessed value | decimal
com_improvement_value | Commercial improvement value | decimal
com_land_value | Commercial land value | decimal
ag_assessed_value | Agricultural assessed value | decimal
ag_improvement_value | Agricultural improvement value | decimal
ag_land_value | Agricultural land value | decimal
prev_owner_1 | Previous owner name | strip
sale_price_1 | Most recent sale price | decimal
sale_date_1 | Most recent sale date | date
parcel_report_url | URL to external assessor parcel report | strip

assessor_total_assessed_value is automatically computed as res_assessed_value + com_assessed_value + ag_assessed_value if it is not mapped or the mapped value is blank.
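That fallback could be sketched as follows (a sketch, not the pipeline's actual code; it assumes blank or missing components count as zero):

```python
from decimal import Decimal

def total_assessed_value(row):
    """Use the mapped total when present and non-blank; otherwise
    sum the residential, commercial, and agricultural components."""
    explicit = row.get("assessor_total_assessed_value")
    if explicit not in (None, ""):
        return Decimal(str(explicit))
    components = ("res_assessed_value", "com_assessed_value", "ag_assessed_value")
    # Missing or blank components are treated as zero (an assumption).
    return sum(Decimal(str(row.get(field) or 0)) for field in components)
```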


Transforms Reference

Transform | Behavior | Use For
identity | Raw value, no change | Any field where no normalization is needed
strip | Strip leading/trailing whitespace | Text fields - addresses, names, IDs
upper | Convert to uppercase | Account numbers or codes requiring uniform casing
lower | Convert to lowercase
decimal | Parse as a decimal number; handles commas (for example, 1,234.56) | All currency / value fields
int | Parse as an integer | Integer counts
date | Parse common date formats: YYYY-MM-DD, MM/DD/YYYY, MM/DD/YY, YYYY/MM/DD; strips Esri timezone suffixes (for example, 2026-01-15T00:00:00.000Z) | Date fields from most assessor exports
date_epoch_ms | Convert epoch milliseconds (integer) to a date | Date fields from Esri feature service JSON exports
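As a rough illustration of these behaviors (a sketch only; the pipeline's real _transform_value() may differ in error handling and format coverage):

```python
from datetime import datetime, timezone
from decimal import Decimal

def parse_date(value):
    """'date' sketch: drop an Esri time suffix, then try the
    documented formats in order."""
    value = value.strip().split("T")[0]  # 2026-01-15T00:00:00.000Z -> 2026-01-15
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%m/%d/%y", "%Y/%m/%d"):
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {value!r}")

# Illustrative transform table keyed by the documented transform names.
TRANSFORMS = {
    "identity": lambda v: v,
    "strip": str.strip,
    "upper": lambda v: v.upper(),
    "lower": lambda v: v.lower(),
    "decimal": lambda v: Decimal(v.replace(",", "").strip()),  # handles 1,234.56
    "int": lambda v: int(v.replace(",", "").strip()),
    "date": parse_date,
    "date_epoch_ms": lambda v: datetime.fromtimestamp(int(v) / 1000, tz=timezone.utc).date(),
}
```

For example, TRANSFORMS["decimal"]("1,234.56") yields Decimal("1234.56"), and an epoch-milliseconds string passed to date_epoch_ms comes back as a plain date.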

Tips for Working With Esri / ArcGIS Exports

  1. Open the CSV in a text editor first. Confirm the exact header row. Esri exports sometimes include non-breaking spaces (\u00a0) in column names - Pie strips these automatically during ingest, but confirming headers upfront prevents confusion.

  2. Match source_field exactly. Source field matching is case-sensitive and whitespace-sensitive. If the CSV header is Parcel_ID (with underscore), the source_field must be Parcel_ID, not ParcelID or parcel_id.

  3. Use decimal for all currency/value columns. Esri exports often format numbers with commas (1,234,567.00). The decimal transform handles this; identity will not parse it as a number.

  4. Date columns from feature services use epoch ms. If you are exporting directly from an Esri feature service (REST API) rather than a file geodatabase export, date values may be integer epoch milliseconds (for example, 1737936000000). Use date_epoch_ms for those columns. Standard file exports typically use YYYY-MM-DD strings; use date for those.

  5. Only map the columns you have. You do not need to map every canonical field. Unmapped canonical fields will simply be blank on the resulting Parcel record. Map only what your assessor extract actually provides.

  6. One profile per extract layout. If different counties use different column names, create a separate profile for each. Profiles are cheap to create and the name makes it clear which profile to use for which source.

  7. Re-use across tax years. A profile does not encode a tax year. The same esri-default profile works for 2025, 2026, and beyond as long as the column structure has not changed.


Key Code Locations

Path | Description
appeals/models.py | ParcelSchemaProfile, ParcelFieldMap, CANONICAL_FIELDS, ParcelTransform
appeals/admin.py | ParcelSchemaProfileAdmin, ParcelFieldMapInline
appeals/views.py | export_parcel_schema_profile(), import_parcel_schema_profile(), _parcel_schema_export_payload()
appeals/services/parcel_import_pipeline.py | _load_field_map(), _transform_value() - where the profile is consumed during import

Data Tools (/admin/data-tools/)

What Is Data Tools?

Data Tools is the central admin page for bulk data operations in Pie. It provides export and import controls for every major dataset, parcel-specific import tooling, schema profile management, and a full-instance migration system. All actions on this page require Django admin staff access — every route is wrapped in admin.site.admin_view().


Page Layout

The page is divided into the following sections, in order:

  1. Parcel Background Jobs — conditionally shown; lists the 10 most recent parcel import jobs with status and row counters. Refresh the page to update.
  2. Dataset Cards — side-by-side export/import cards for every standard dataset.
  3. Parcel Schema Export / Import — JSON transfer of a ParcelSchemaProfile between environments.
  4. Migration Ledger Export / Import — full-instance data migration via a deterministic JSON snapshot.
  5. Database Manifest Export — export-only manifest of every Pie model/table.
  6. Audit Logs Export — export-only full audit log.
  7. Parameter Manifest Export — export-only field-level schema manifest.
  8. Parcel Sections (bottom of page):
    • Parcels.csv Import (Upload File)
    • Parcels.csv Import (Azure Blob URL)
    • Dump Parcel Records

Parcel Background Jobs

When any parcel import job is active or has run recently, a status table appears at the top of the page showing:

Column | Description
Action | VALIDATE, APPLY, BLOB_IMPORT, DUMP_PARCELS, or PURGE_RAW_ROWS
Status | QUEUED, RUNNING, SUCCESS, or FAILED
Batch | The ParcelImportBatch ID the job is working on (or -- for non-batch jobs)
Progress | current / total row count
Message | Latest progress message from the worker

Progress does not auto-refresh — reload the page to get the latest status.


Standard Dataset Cards

For each dataset below, there is an Export card (downloads a CSV immediately) and an Import card (uploads a CSV and applies it). Datasets with extra fields require additional form inputs before import.

Dataset | Export File | Import Behavior | Extra Fields
Appeals | Appeals.csv | Overwrites existing appeal records by appeal_id
Hearings | Hearings.csv | Replaces the hearings table; appeal_id column must reference valid Appeal IDs
Documents | Documents.csv | Replaces the documents table
Document Types | DocumentTypes.csv | Replaces the document types table
Meetings | Meetings.csv | Replaces the meetings table
Meeting Items | MeetingAppeal.csv | Replaces the MeetingAppeal table
Board Actions | Motion.csv | Replaces the Motion table
Contacts | Contacts.csv | Overwrites all existing contacts
Appeal Parties | AppealParties.csv | Replaces the appeal parties table
Action Types | ActionTypes.csv | Replaces the action types table
Services | Services.csv | Replaces the Services table by service_code; deletes rows not in the file
Prior Determination Accounts | PriorDeterminations.csv | Replaces all prior determination records for the selected year. Accepts PreviousAppeals.csv (with AppealNumber, AccountNumber, ReasonCode) or BlackList.csv (with AppealNumber, AccountNumber). Rows with ReasonCode 99 are skipped. | Tax year
Schedule Slot | ScheduleSlot.csv | Replaces the schedule slots table
Error References | ErrorReferences.csv | Upserts records by error_reference_code
Communications | Communications.csv | Creates and updates communication records
CommunicationSubjects | CommunicationSubjects.csv | Creates and updates communication subject rows

Export-only datasets (no import card):

Dataset | Export File | Purpose
Audit Logs | AuditLogs.csv | Full audit log for review or archiving
Database Manifest | DatabaseManifest.csv | Lists every model in Pie
Parameter Manifest | ParameterManifest.csv | Field-level manifest for every model/table including data types, nullability, and PK/FK linkage metadata

Parcel Schema Export / Import

Transfers a ParcelSchemaProfile (column mappings) between Pie environments as a JSON file.

Export

  • Select a profile from the dropdown and click Download ParcelSchemaProfile.json.
  • Produces a pie.parcel-schema-profile JSON file containing the profile settings and all column mappings.
  • URL: POST /admin/data-tools/export/parcel-schema/

Import

  • Upload a previously exported .json file and click Import ParcelSchemaProfile.json.
  • Matches by profile name:
    • If the name does not exist, a new profile is created.
    • If the name already exists, vendor/description/active are updated and all existing column mappings are deleted and replaced with the imported ones.
  • URL: POST /admin/data-tools/import/parcel-schema/

See parcel_schema_profile.md for the full reference on schema profiles.


Migration Ledger Export / Import

The Migration Ledger is a full-instance data snapshot in JSON format, designed for moving an entire Pie configuration from one environment to another (e.g., local → staging → production).

Export

  • Click Download MigrationLedger.json.
  • Produces a timestamped MigrationLedger_YYYYMMDD_HHMMSS.json file.
  • Excludes: audit log records, parcel staging tables (ParcelRawRow, ParcelImportBatch, ParcelImportJob), and document file bytes (document rows include metadata only, not the binary file content).
  • URL: GET /admin/data-tools/export/migration-ledger/

Import

  • Upload a MigrationLedger.json file and click Import MigrationLedger.json.
  • The import is:
    • Dependency-aware — processes models in foreign key dependency order.
    • Primary-key-preserving — records are created with their original IDs.
    • Many-to-many aware — M2M links are applied after all base rows are written.
    • Idempotent — rows are created or updated, not duplicated.
  • After import, Django messages report: models processed, created, updated, skipped, and error counts. Warnings and row errors (up to 8 each) are shown inline.
  • URL: POST /admin/data-tools/import/migration-ledger/
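The dependency-aware ordering described above can be illustrated with Python's standard graphlib. This is a sketch under an assumed representation (a dict mapping each model to the models its foreign keys reference), not the ledger importer's actual code:

```python
from graphlib import TopologicalSorter

def import_order(fk_deps):
    """fk_deps maps model name -> set of models it references via FK.
    static_order() yields referenced models before their dependents,
    so each row's FK targets already exist when it is written."""
    return list(TopologicalSorter(fk_deps).static_order())
```

For example, with Hearing referencing Appeal and Appeal referencing Contact, Contact is imported first and Hearing last.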

Parcels.csv Import (Upload File)

For importing a local parcel CSV directly through the browser.

Form fields:

  • Tax year — the assessment year for this import (e.g., 2026)
  • Schema profile — the ParcelSchemaProfile that maps the CSV column headers to canonical Parcel fields
  • File — the .csv file to upload

What happens on submit:

  1. The file is uploaded and a ParcelImportBatch is created.
  2. A background ParcelImportJob with action APPLY is queued (validate + apply runs automatically).
  3. The parcel_import_worker process must be running to pick up the job.

URL: POST /admin/data-tools/import/parcels/

For large files that may cause HTTP timeouts, use the Azure Blob URL method below instead.


Parcels.csv Import (Azure Blob URL)

For importing a parcel CSV that has been pre-uploaded to Azure Blob Storage. This method runs entirely in the background and avoids browser timeout issues.

Steps shown on the page:

  1. Upload the CSV to the doc container in Azure Storage.
  2. Copy the blob URL (starts with https://caliperdocuments.blob.core.windows.net/doc/).
  3. Fill in the form and click Import from Blob.
  4. Monitor progress in the Parcel Background Jobs section by refreshing the page.
  5. For detailed status and row counts, go to Parcel Teleporter (Start Import) (the ParcelImportBatch admin list).

Form fields:

  • Blob URL — full HTTPS URL to the blob (e.g., https://caliperdocuments.blob.core.windows.net/doc/imports/Parcels_2026.csv)
  • Tax year — the assessment year
  • Schema profile — the active ParcelSchemaProfile to use

What happens on submit:

  1. URL is validated (must be https://, must include container/blob path).
  2. A ParcelImportJob with action BLOB_IMPORT is queued.
  3. The worker downloads the file from blob storage, runs ingest → validate → apply automatically.
  4. Job status is stored in the session and shown at the top of the page on next load.

URL: POST /admin/data-tools/import/parcels-blob/
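The two documented URL checks (https scheme, container/blob path) might look like the sketch below. The function name and error messages are hypothetical; only the checks themselves come from the page:

```python
from urllib.parse import urlparse

def validate_blob_url(url):
    """Return an error message for a bad blob URL, or None if valid."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return "URL must use https://"
    # Path needs at least a container segment and a blob name.
    segments = [s for s in parsed.path.split("/") if s]
    if len(segments) < 2:
        return "URL must include container/blob path"
    return None
```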


Dump Parcel Records

Wipes all parcel data from the database — Parcel, ParcelImportBatch, and ParcelRawRow records.

  • Clicking Dump Parcel Records opens a confirmation page (/admin/data-tools/dump-parcels/) that shows current record counts.
  • The confirmation page requires typing delete before the action proceeds.
  • Blocked if any Appeals exist — the dump cannot run while Appeal records are present.
  • On confirmation, queues a ParcelImportJob with action DUMP_PARCELS and logs an audit entry.
  • The actual deletion is executed by the parcel_import_worker background process.

This is irreversible. Use only when preparing to re-import a clean parcel dataset from scratch.


Access Control

Every URL under /admin/data-tools/ is wrapped in admin.site.admin_view(), which enforces Django admin staff authentication. Non-staff users are redirected to the admin login page.


URL Reference

Action | Method | URL
Data Tools page | GET | /admin/data-tools/
Export Appeals | GET | /admin/data-tools/export/appeals/
Import Appeals | POST | /admin/data-tools/import/appeals/
Export Hearings | GET | /admin/data-tools/export/hearings/
Import Hearings | POST | /admin/data-tools/import/hearings/
Export Documents | GET | /admin/data-tools/export/documents/
Import Documents | POST | /admin/data-tools/import/documents/
Export Document Types | GET | /admin/data-tools/export/document-types/
Import Document Types | POST | /admin/data-tools/import/document-types/
Export Meetings | GET | /admin/data-tools/export/meetings/
Import Meetings | POST | /admin/data-tools/import/meetings/
Export Meeting Items | GET | /admin/data-tools/export/meetingappeals/
Import Meeting Items | POST | /admin/data-tools/import/meetingappeals/
Export Board Actions | GET | /admin/data-tools/export/motions/
Import Board Actions | POST | /admin/data-tools/import/motions/
Export Contacts | GET | /admin/data-tools/export/contacts/
Import Contacts | POST | /admin/data-tools/import/contacts/
Export Appeal Parties | GET | /admin/data-tools/export/appeal-parties/
Import Appeal Parties | POST | /admin/data-tools/import/appeal-parties/
Export Action Types | GET | /admin/data-tools/export/action-types/
Import Action Types | POST | /admin/data-tools/import/action-types/
Export Services | GET | /admin/data-tools/export/services/
Import Services | POST | /admin/data-tools/import/services/
Export Prior Determinations | GET | /admin/data-tools/export/prior-determinations/
Import Prior Determinations | POST | /admin/data-tools/import/prior-determinations/
Export Schedule Slots | GET | /admin/data-tools/export/scheduleslots/
Import Schedule Slots | POST | /admin/data-tools/import/scheduleslots/
Export Error References | GET | /admin/data-tools/export/error-references/
Import Error References | POST | /admin/data-tools/import/error-references/
Export Communications | GET | /admin/data-tools/export/communications/
Import Communications | POST | /admin/data-tools/import/communications/
Export Communication Subjects | GET | /admin/data-tools/export/communication-subjects/
Import Communication Subjects | POST | /admin/data-tools/import/communication-subjects/
Export Audit Logs | GET | /admin/data-tools/export/audit-logs/
Export Database Manifest | GET | /admin/data-tools/export/database-manifest/
Export Parameter Manifest | GET | /admin/data-tools/export/parameter-manifest/
Export Parcels CSV | GET | /admin/data-tools/export/parcels/
Import Parcels (file upload) | POST | /admin/data-tools/import/parcels/
Import Parcels (blob URL) | POST | /admin/data-tools/import/parcels-blob/
Export Parcel Schema Profile | POST | /admin/data-tools/export/parcel-schema/
Import Parcel Schema Profile | POST | /admin/data-tools/import/parcel-schema/
Export Migration Ledger | GET | /admin/data-tools/export/migration-ledger/
Import Migration Ledger | POST | /admin/data-tools/import/migration-ledger/
Dump Parcel Records (confirm) | GET/POST | /admin/data-tools/dump-parcels/

Key Code Locations

Path | Description
appeals/views.py | data_tools() view and all export/import view functions
appealsys/urls.py | All /admin/data-tools/ URL patterns
templates/admin/data_tools.html | Page template
appeals/services/parcel_import_pipeline.py | Pipeline called by parcel import views
appeals/services/migration_ledger.py | export_migration_ledger_json() and import_migration_ledger_payload()

Parcel Column Import Manager (/admin/appeals/parcelschemaprofile/)

What Is This Page?

The Parcel Column Import Manager is the admin section for creating and editing ParcelSchemaProfile records. A schema profile is the translation table that tells Pie how to read a vendor's CSV export — mapping each source column header to one of Pie's canonical parcel fields and specifying how the raw value should be parsed.

Every parcel import requires a profile. This page is where profiles are built and maintained.

Access: Visible and accessible to superusers only. Staff users cannot see this section in the navigation even if they have admin access. This is enforced by get_model_perms() in ParcelSchemaProfileAdmin.


List View

URL: /admin/appeals/parcelschemaprofile/

Columns

Column | Description
Name | The profile identifier — used to select the profile during import
Vendor | The data source platform (Esri / ArcGIS)
Active | Whether the profile appears in import dropdowns on Data Tools and Validate Importations

Filters

  • Vendor — filter by data platform
  • Active — filter by active/inactive status

Search

Searches across name and description fields.

Action: Auto-fill Friendly Labels

Select one or more profiles and run Auto-fill Friendly Labels (leave existing values untouched) to fill any blank friendly_label values on all column mappings for the selected profiles. Labels are generated by title-casing the canonical field name (for example, res_assessed_value → Res Assessed Value). Mappings that already have a label are not touched.

This is a cosmetic operation — friendly labels are displayed in the UI but never used by the import pipeline.
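The label generation described above amounts to simple title-casing. The helper names below are hypothetical (the real logic lives in the admin action), but the behavior matches what the page documents:

```python
def friendly_label(canonical_field):
    """Title-case a canonical field name, e.g. res_assessed_value -> Res Assessed Value."""
    return canonical_field.replace("_", " ").title()

def autofill_friendly_labels(mappings):
    """Fill only blank labels; mappings that already have one are untouched."""
    for mapping in mappings:
        if not mapping.get("friendly_label"):
            mapping["friendly_label"] = friendly_label(mapping["canonical_field"])
    return mappings
```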


Detail / Edit View

URL: /admin/appeals/parcelschemaprofile/<id>/change/

Profile Fields

Field | Required | Description
Name | Yes | Unique identifier, max 128 characters. Used to select the profile at import time and to match by name during JSON import.
Vendor | Yes | Currently the only choice is Esri / ArcGIS.
Description | No | Free-text note — useful for recording which county or data source this profile covers.
Active | Yes | Controls visibility in import dropdowns. Inactive profiles are hidden from Data Tools and Validate Importations but remain in the database.

Column Mappings Inline

Below the profile fields is the Column Mappings tabular inline. Each row maps one CSV column to one canonical Parcel field.

Field | Required | Description
Source Field | Yes | Exact CSV column header as it appears in the file, including case and any underscores. Max 255 characters. Must be unique within the profile.
Canonical Field | Yes | The Pie parcel field this column maps to. Selected from a fixed dropdown — see full list below.
Required | - | If checked, rows missing this field (or where the field is blank after transformation) are marked invalid during validation and not applied. At minimum, account should be required.
Transform | Yes | How the raw string value is parsed before storing — see transforms below.
Friendly Label | No | Display-only label shown in the UI. Does not affect import logic in any way. Can be filled automatically using the Auto-fill Friendly Labels action.

Rows are ordered by profile, then source field alphabetically.


Canonical Fields

The complete list of fields a column mapping can target:

Canonical Field | Label | Notes
account | Account | Primary identifier — always required
parcel_id | Parcel ID
owner_name | Owner Name
situs_address | Situs Address | Physical property address
mailing_address | Mailing Address | Full mailing address string
municipality | Municipality | Taxing jurisdiction / city
situs_zip | Situs ZIP
assessor_total_assessed_value | Assessor Total Assessed Value | Auto-computed from res+com+ag if not mapped
assessor_total_market_value | Assessor Total Market Value
res_assessed_value | Residential Assessed Value
res_improvement_value | Residential Improvement Value
res_land_value | Residential Land Value
com_assessed_value | Commercial Assessed Value
com_improvement_value | Commercial Improvement Value
com_land_value | Commercial Land Value
ag_assessed_value | Agricultural Assessed Value
ag_improvement_value | Agricultural Improvement Value
ag_land_value | Agricultural Land Value
prev_owner_1 | Previous Owner 1
sale_price_1 | Sale Price 1
sale_date_1 | Sale Date 1
parcel_report_url | Parcel Report URL | URL to external assessor parcel detail page

assessor_total_assessed_value is automatically computed by the pipeline as res_assessed_value + com_assessed_value + ag_assessed_value if it is not present in the CSV or the mapped value is blank. You do not need to map it unless the source file provides it directly.


Transforms

Transform | Behavior
identity | Raw string value, no change
strip | Strip leading and trailing whitespace
upper | Convert to uppercase
lower | Convert to lowercase
decimal | Parse as decimal number — handles comma-formatted values (e.g., 1,234,567.00)
int | Parse as integer
date | Parse common date formats: YYYY-MM-DD, MM/DD/YYYY, MM/DD/YY, YYYY/MM/DD; strips Esri timezone suffixes (e.g., 2026-01-15T00:00:00.000Z)
date_epoch_ms | Convert integer epoch milliseconds to a date — used for date columns from Esri feature service REST exports

Constraints and Validation

  • Name is unique across all profiles — two profiles cannot share the same name.
  • Source field is unique within a profile — a given CSV column can only be mapped once per profile. Attempting to add a duplicate source_field on the same profile raises a validation error.
  • The import pipeline only applies mappings to columns it finds in the CSV header. Mappings for columns not present in the file are silently skipped (unless the mapping is marked Required, in which case the row is marked invalid).
  • Mappings for legacy canonical fields (objectid, situs_name) that have been removed from the Parcel model are silently dropped during JSON export and import. They cannot be added via the admin dropdown as they are not in the CANONICAL_FIELDS list.

Relationship to Imports

When an import runs (via Data Tools, Validate Importations, or CLI), the selected profile is loaded by _load_field_map() in the pipeline service. The pipeline:

  1. Reads the ParcelFieldMap rows for the selected profile.
  2. For each CSV row, looks up each column header in the field map.
  3. Applies the specified transform to the raw value.
  4. Stores the result under the canonical field name in the normalized JSON.
  5. Checks required fields — marks the row invalid if any are missing or unparseable.

The profile is stored as a foreign key on the ParcelImportBatch record (schema_profile), so it is always possible to see which profile was used for a given import.
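The per-row mapping logic in steps 1–5 can be sketched as follows. This is an in-memory illustration; validate_row and the field-map shape are assumptions, not the pipeline's actual signatures:

```python
def validate_row(raw: dict, field_map: dict, required: set) -> tuple[dict, list]:
    """Hypothetical sketch of steps 1-5: field_map maps a CSV header to
    (canonical_name, transform_fn); names are illustrative."""
    normalized, errors = {}, []
    for header, value in raw.items():
        mapping = field_map.get(header)
        if mapping is None:
            continue  # columns without a mapping are skipped
        canonical, transform = mapping
        try:
            normalized[canonical] = transform(value)
        except Exception:
            errors.append(f"Field '{canonical}': cannot parse {value!r}.")
    for name in required:  # required fields must be present and non-blank
        if not normalized.get(name):
            errors.append(f"Required field '{name}' is missing or blank.")
    return normalized, errors
```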


Export and Import via JSON

Profiles can be transferred between environments without manual re-entry using the JSON export/import on Data Tools.

  • Export: Data Tools → Parcel Schema Export → select profile → Download ParcelSchemaProfile.json
  • Import: Data Tools → Parcel Schema Import → upload .json file

The import matches by profile name and replaces all column mappings atomically. See parcel_schema_profile.md for the full JSON format reference.


Key Code Locations

| Path | Description |
|---|---|
| appeals/admin.py:266 | ParcelFieldMapInline — inline column mapping editor |
| appeals/admin.py:282 | ParcelSchemaProfileAdmin — list display, filters, actions, superuser-only gate |
| appeals/models.py:790 | ParcelSchemaProfile model |
| appeals/models.py:809 | CANONICAL_FIELDS — authoritative list of valid canonical field names |
| appeals/models.py:835 | ParcelTransform — transform choices |
| appeals/models.py:846 | ParcelFieldMap model |
| appeals/services/parcel_import_pipeline.py | _load_field_map(), _transform_value() — where the profile is consumed |
| appeals/views.py:1119 | LEGACY_PARCEL_CANONICAL_FIELD_ALIASES — dropped legacy field names |
| appeals/views.py:1135 | _parcel_schema_export_payload() — JSON export logic |
| appeals/views.py:1324 | import_parcel_schema_profile() — JSON import logic |

Parcel Import Jobs (/admin/appeals/parcelimportjob/)

What Is This Page?

Parcel Import Jobs is the admin section for monitoring and inspecting background jobs in the parcel import pipeline. Every validate, apply, blob import, dump, and raw-row purge operation runs as a ParcelImportJob record. This page is the authoritative history of all background parcel operations — what ran, when, who queued it, and what happened.

Jobs are created (queued) by:

  • Data Tools — Parcels.csv Import (file upload or Azure Blob URL)
  • Data Tools — Dump Parcel Records
  • Validate Importations — Queue Validation / Queue Apply buttons and bulk actions
  • Parcel Row Import Manager — Queue background purge action

The parcel_import_worker management command is the process that picks up and executes queued jobs. Nothing runs unless the worker is running.

Access: Viewable by all staff users. No staff user can add or manually edit a job — the admin is entirely read-only. Jobs are created only through the queue system.


List View

URL: /admin/appeals/parcelimportjob/

Jobs are ordered newest first (-created_at).

Columns

| Column | Description |
|---|---|
| ID | Auto-assigned job identifier |
| Action | What the job does — see action types below |
| Status | Current lifecycle state — see statuses below |
| Batch | The ParcelImportBatch this job operates on, if applicable. Blank for DUMP_PARCELS and PURGE_RAW_ROWS |
| Progress Current | Rows processed so far |
| Progress Total | Total rows to process |
| Progress Message | Latest message from the worker (e.g., Validating, Applying, Completed, Failed) |
| Created At | When the job was queued |
| Started At | When the worker claimed and began the job |
| Finished At | When the job completed or failed |

Filters

  • Action — filter by job type
  • Status — filter by lifecycle state

Search

Searches by job ID or batch ID.


Action Types

| Action | What It Does |
|---|---|
| VALIDATE | Runs the validate/normalize phase on a batch — applies field map transforms, marks each ParcelRawRow as valid or invalid, updates batch row counts |
| APPLY | Runs the apply/upsert phase — writes valid rows to Parcel records. Auto-validates first if the batch is not yet in VALIDATED status |
| BLOB_IMPORT | Downloads a CSV from Azure Blob Storage, then runs ingest → validate → apply as a single background operation |
| DUMP_PARCELS | Deletes all Parcel, ParcelRawRow, and ParcelImportBatch records in chunked batches. Blocked if any Appeals exist |
| PURGE_RAW_ROWS | Deletes ParcelRawRow records for one or more specific batches. Leaves the Parcel and ParcelImportBatch records intact |

Job Statuses

| Status | Meaning |
|---|---|
| QUEUED | Created and waiting for the worker to pick it up |
| RUNNING | Claimed by the worker; currently executing |
| SUCCESS | Completed without error. progress_message = Completed |
| FAILED | An exception was raised. last_error contains the exception message |

Status transitions are always: QUEUED → RUNNING → SUCCESS or FAILED. There is no retry — a failed job remains failed. To retry, a new job must be queued (e.g., by clicking Queue Validation or Queue Apply again on the batch).
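The lifecycle amounts to a tiny state machine, sketched here for illustration (not the actual model code):

```python
# Illustrative state machine for the job lifecycle (not the model code).
ALLOWED_TRANSITIONS = {
    "QUEUED": {"RUNNING"},
    "RUNNING": {"SUCCESS", "FAILED"},
    "SUCCESS": set(),  # terminal — retrying means queueing a new job
    "FAILED": set(),   # terminal
}

def can_transition(current: str, target: str) -> bool:
    return target in ALLOWED_TRANSITIONS[current]
```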


Detail View

URL: /admin/appeals/parcelimportjob/<id>/change/

All fields are read-only. No field can be edited through the admin.

Fields

| Field | Description |
|---|---|
| Action | Job type |
| Status | Lifecycle state |
| Batch | Link to the associated ParcelImportBatch, if any |
| Created By | The staff user who triggered the job. Blank for jobs queued by the worker itself (e.g., blob imports creating their own batch mid-run) |
| Payload | JSON object with action-specific parameters — see payload details below |
| Progress Current / Total | Row-level progress counters |
| Progress Message | Latest status message |
| Last Error | Exception message if the job failed. Empty on success |
| Started At | Timestamp when the worker claimed the job |
| Finished At | Timestamp when the job completed or failed |
| Created At / Updated At | Record timestamps |

Payload Contents by Action

| Action | Payload Fields |
|---|---|
| VALIDATE | Typically empty — batch reference is stored in the batch FK |
| APPLY | Typically empty — batch reference is stored in the batch FK |
| BLOB_IMPORT | blob_url, tax_year, profile_name |
| DUMP_PARCELS | Populated after completion: deleted_parcels, deleted_batches, deleted_raw_rows |
| PURGE_RAW_ROWS | import_batch_ids (list), requested_rows, dedupe_key; populated after completion: deleted_raw_rows |

Job Deduplication

The queue system prevents redundant jobs from piling up. When a job is queued via queue_parcel_job():

  • For batch-linked jobs (VALIDATE, APPLY): if a job with the same action and batch is already QUEUED or RUNNING, the existing job is returned and no new record is created.
  • For non-batch jobs (DUMP_PARCELS, PURGE_RAW_ROWS): deduplication is by dedupe_key in the payload. If a job with the same action, no batch, and matching dedupe_key is already active, the existing job is returned.

This means clicking Queue Validation twice for the same batch will not create two jobs — the second click is silently a no-op.
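The deduplication rules can be sketched with an in-memory stand-in. This is illustrative only — the real queue_parcel_job() works against ParcelImportJob rows with a database query:

```python
ACTIVE_STATUSES = {"QUEUED", "RUNNING"}

def queue_job(jobs: list, action: str, batch_id=None, dedupe_key=None) -> dict:
    """In-memory sketch of the deduplication rules (the real
    queue_parcel_job() queries ParcelImportJob records)."""
    for job in jobs:
        if job["status"] not in ACTIVE_STATUSES or job["action"] != action:
            continue
        # Batch-linked jobs dedupe on (action, batch).
        if batch_id is not None and job.get("batch_id") == batch_id:
            return job
        # Non-batch jobs dedupe on (action, dedupe_key).
        if batch_id is None and job.get("batch_id") is None \
                and job.get("dedupe_key") == dedupe_key:
            return job
    job = {"action": action, "batch_id": batch_id,
           "dedupe_key": dedupe_key, "status": "QUEUED"}
    jobs.append(job)
    return job
```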


The Worker Process

Jobs do not execute automatically — they require the parcel_import_worker management command to be running as a long-lived daemon process.

python manage.py parcel_import_worker

How the worker operates:

  1. Polls for QUEUED jobs every 5 seconds (default; configurable with --sleep N).
  2. Claims the oldest queued job using SELECT FOR UPDATE SKIP LOCKED to prevent concurrent workers from claiming the same job.
  3. Transitions the job to RUNNING and records started_at.
  4. Executes the action.
  5. On success: sets status to SUCCESS, progress_message = Completed, records finished_at.
  6. On exception: sets status to FAILED, writes the exception to last_error, records finished_at.

Progress updates are written to the job record every 250 rows or every 2 seconds (throttled by the ProgressTracker in the pipeline service). Refreshing the list view or detail view shows the latest progress.
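The throttling behavior can be sketched like this; the class name and constructor shape are assumptions, not the actual ProgressTracker API:

```python
import time

class ProgressSketch:
    """Illustrative throttle: report at most every `every_rows` rows or
    `every_secs` seconds (the real ProgressTracker lives in the pipeline
    service; this shape is an assumption)."""

    def __init__(self, flush, every_rows=250, every_secs=2.0):
        self.flush = flush            # callable receiving the row count
        self.every_rows = every_rows
        self.every_secs = every_secs
        self.count = 0
        self.last_flush = time.monotonic()

    def tick(self):
        self.count += 1
        now = time.monotonic()
        if self.count % self.every_rows == 0 or now - self.last_flush >= self.every_secs:
            self.flush(self.count)    # e.g., write progress_current to the job row
            self.last_flush = now
```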

Single-run mode (useful for testing or one-off manual execution):

python manage.py parcel_import_worker --once

What to Check When a Job Fails

  1. Open the failed job's detail view.
  2. Read Last Error — this is the raw exception message from the worker.
  3. Common failure causes:
| Error | Cause |
|---|---|
| Profile '<name>' not found | The schema profile named in the blob import payload no longer exists or was renamed |
| Azure credentials are not configured | AZURE_STORAGE_CONNECTION_STRING (or AZURE_STORAGE_ACCOUNT_NAME + AZURE_STORAGE_KEY) env vars are missing |
| Blob URL must include container and blob path | The blob URL submitted was malformed |
| Parcel dump blocked while N appeal(s) exist | Dump cannot run while any Appeal records exist — delete appeals first |
| Validate job requires a batch / Apply job requires a batch | The associated ParcelImportBatch was deleted before the job ran |
| Raw row purge job requires at least one import batch id | The PURGE_RAW_ROWS payload had no valid batch IDs |
  4. To retry after fixing the root cause, re-queue a new job from the batch detail page or Data Tools — do not attempt to edit the failed job record.

Relationship to Other Pages

| Page | Relationship |
|---|---|
| Validate Importations | Batch detail page shows the latest job's progress; Queue Validation and Queue Apply buttons create VALIDATE / APPLY jobs |
| Data Tools | Shows last 10 jobs in the Parcel Background Jobs table; file upload and blob URL forms create APPLY / BLOB_IMPORT jobs; Dump Parcel Records creates a DUMP_PARCELS job |
| Parcel Row Import Manager | Queue background purge action creates PURGE_RAW_ROWS jobs |

Key Code Locations

| Path | Description |
|---|---|
| appeals/models.py:913 | ParcelImportJob model — action/status choices, payload, progress fields |
| appeals/admin.py:557 | ParcelImportJobAdmin — read-only list and detail view |
| appeals/services/parcel_import_jobs.py | queue_parcel_job() — deduplication logic and job creation |
| appeals/management/commands/parcel_import_worker.py | Worker daemon — _claim_next_job(), _run_job(), all action handlers |
| appeals/services/parcel_import_pipeline.py | validate_batch(), apply_batch() — called by the worker |

Parcel Row Import Manager (/admin/appeals/parcelrawrow/)

What Is This Page?

The Parcel Row Import Manager is the admin section for inspecting individual ParcelRawRow records — one record per CSV row per import batch. Every row ingested from a parcel CSV is stored here in its original and normalized forms, along with any validation errors produced during the validate phase.

This page is primarily a diagnostic tool. When an import has invalid rows, this is where you come to see exactly what was in the source data, what the pipeline produced after applying transforms, and which specific validation errors caused a row to be rejected.

Access: Visible to superusers only. Regular staff users cannot see this section in the navigation. This is enforced by get_model_perms() in ParcelRawRowAdmin.


List View

URL: /admin/appeals/parcelrawrow/

Rows are ordered by row_number (ascending) within each batch.

Columns

| Column | Description |
|---|---|
| Import Batch | The ParcelImportBatch this row belongs to (links to the batch) |
| Row Number | The 1-based row position in the original CSV file |
| Normalized Key | The account value extracted from the row (populated during ingest from the raw account-mapped column if identifiable, otherwise blank). Indexed for fast lookup. |
| Valid | Boolean — whether the row passed validation. True rows are applied to Parcel records; False rows are not. |

Filters

  • Valid — show only valid or only invalid rows
  • Import Batch — filter to all rows for a specific batch

Search

Searches by normalized_key (account value).

Action: Queue Background Purge

Selecting rows and running Queue background purge for selected import batch(es) queues a PURGE_RAW_ROWS job for the import batches represented in the selection.

Important behavior:

  • The purge operates at the batch level, not the row level. Selecting any row from a batch targets all raw rows for that entire batch — not just the selected rows.
  • Deduplication prevents a second purge job from being queued if one is already QUEUED or RUNNING for the same batch set.
  • If a matching job already exists, the action reports the existing job ID and takes no further action.
  • The default Django Delete selected action is removed — raw rows cannot be deleted individually through the admin. Deletion must go through the background purge job.

Detail View

URL: /admin/appeals/parcelrawrow/<id>/change/

All fields are read-only. No field can be edited.

Fields

| Field | Description |
|---|---|
| Import Batch | The ParcelImportBatch this row came from |
| Row Number | 1-based position in the source CSV |
| Normalized Key | Account value — the primary lookup key for this row |
| Valid | Whether the row passed validation |
| Raw | The original CSV row stored as a JSON object. Keys are the exact column headers from the CSV file; values are the unmodified strings from the source. This is never altered after ingest. |
| Normalized | The validated and transformed row as a JSON object. Keys are canonical field names (e.g., account, res_assessed_value); values are the parsed results after applying the schema profile's transforms. Populated during the validate phase. null before validation. |
| Errors | A JSON list of error strings describing why the row failed validation. Empty list ([]) for valid rows. null before validation. |

How Raw Rows Are Created

During the ingest phase of the pipeline:

  1. The CSV is read line by line.
  2. Each row is stored as a ParcelRawRow with raw = the original CSV row dict and valid = False.
  3. Rows with a blank account field are skipped entirely — they produce no ParcelRawRow record.
  4. Rows are bulk-created in batches for efficiency.

At this point normalized, errors, and normalized_key are all unpopulated.
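The ingest phase above can be sketched as follows. This is illustrative; the real ingest_file() works on uploaded files and bulk-creates model records, and the account column name here is an assumption:

```python
import csv
import io

def ingest(csv_text: str, account_column: str = "Account") -> list[dict]:
    """Sketch of the ingest phase (the real ingest_file() bulk-creates
    ParcelRawRow records; the account column name is an assumption)."""
    rows = []
    for number, record in enumerate(csv.DictReader(io.StringIO(csv_text)), start=1):
        if not (record.get(account_column) or "").strip():
            continue  # blank-account rows produce no raw-row record
        rows.append({"row_number": number, "raw": record, "valid": False,
                     "normalized": None, "errors": None})
    return rows
```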


How Rows Are Validated

During the validate phase:

  1. The pipeline loads the schema profile's field maps.
  2. For each raw row, it applies the transform for each mapped column.
  3. The result is stored in normalized as a canonical-field-keyed JSON object.
  4. Any field marked required that is missing or unparseable is recorded in errors.
  5. valid is set to True if errors is empty, False otherwise.
  6. normalized_key is set to the account value extracted from the normalized output.

Reading the Raw and Normalized Fields

raw example

This is exactly what was in the CSV:

{
  "Account": "  1234-5678  ",
  "Owner": "SMITH JOHN",
  "SiteAddress": "123 MAIN ST",
  "ResidentialAssessedValue": "142,500.00",
  "TotalMarketValue": "189,000.00"
}

normalized example

After applying the schema profile transforms (strip, strip, strip, decimal, decimal):

{
  "account": "1234-5678",
  "owner_name": "SMITH JOHN",
  "situs_address": "123 MAIN ST",
  "res_assessed_value": "142500.00",
  "assessor_total_market_value": "189000.00"
}

errors example for an invalid row

[
  "Required field 'account' is missing or blank."
]

or for a transform failure:

[
  "Field 'res_assessed_value': cannot convert 'N/A' to decimal."
]

Raw Row Retention

Raw rows are retained indefinitely after import — they are not automatically deleted when a batch is applied. This is intentional: they serve as the permanent audit record of exactly what source data was imported.

To free database space after a batch has been applied and verified, use the Queue background purge action on this page, or the same action available from the batch detail page in Validate Importations. The PURGE_RAW_ROWS job deletes all ParcelRawRow records for the targeted batches without affecting the Parcel records that were created from them.


Relationship to Other Pages

| Page | Relationship |
|---|---|
| Validate Importations | Batch detail shows Top Errors and Invalid Rows (sample) derived from raw rows; each batch links back to its raw rows |
| Property Account Search | Each Parcel record has a raw_last_seen FK pointing to the most recent ParcelRawRow that populated it |
| Parcel Import Jobs | PURGE_RAW_ROWS jobs created here are visible and monitored on the Import Jobs page |

Key Code Locations

| Path | Description |
|---|---|
| appeals/models.py:1085 | ParcelRawRow model — all fields and indexes |
| appeals/admin.py:600 | ParcelRawRowAdmin — read-only list/detail, purge action, superuser gate, delete removal |
| appeals/services/parcel_import_pipeline.py | ingest_file() creates raw rows; validate_batch() populates normalized, errors, valid, normalized_key |
| appeals/services/parcel_import_jobs.py | queue_parcel_job() — used by the purge action to create the background job |
| appeals/management/commands/parcel_import_worker.py | _run_purge_raw_rows() — executes the chunked deletion |

Validate Importations (/admin/appeals/parcelimportbatch/)

What Is This Page?

Validate Importations is the admin section for managing ParcelImportBatch records. A batch represents one uploaded CSV file — it tracks the file itself, which tax year and schema profile it belongs to, its current pipeline status, and all row counts and errors produced during validation.

This page is the step-by-step path for importing parcels when uploading a file directly through the admin (as opposed to using the Azure Blob URL method on Data Tools). It is also where you monitor validation results and apply batches after reviewing them.

Access: Available to all staff users with standard Django admin permissions on the ParcelImportBatch model.


List View

URL: /admin/appeals/parcelimportbatch/

Batches are ordered newest first (-uploaded_at).

Columns

| Column | Description |
|---|---|
| Original Filename | The name of the uploaded CSV file |
| Tax Year | The assessment year this batch was imported for |
| Status | Current pipeline state — see statuses below |
| Row Count Total | Total rows ingested (excludes blank-account rows) |
| Row Count Valid | Rows that passed validation |
| Row Count Invalid | Rows that failed validation |
| Schema Profile | The ParcelSchemaProfile used for this batch |
| Uploaded At | When the batch was created |

Filters

  • Tax Year — filter to a specific assessment year
  • Status — filter by pipeline state
  • Schema Profile — filter by which profile was used

Search

Searches by original filename and file hash (SHA256).

Bulk Actions

| Action | Description |
|---|---|
| Validate selected batches (background) | Queues a VALIDATE job for each selected batch. Skips batches that already have a QUEUED or RUNNING validate job. |
| Apply selected batches (background) | Queues an APPLY job for each selected batch. The apply worker auto-validates first if the batch is not yet in VALIDATED status. Skips batches with an active job. |

Batch Statuses

| Status | Meaning |
|---|---|
| UPLOADED | File accepted and stored. Ingest and validation have not run yet — raw rows are created when the first job processes the batch. |
| VALIDATED | Validate phase complete. Row counts and error details are populated. Ready to apply. |
| APPLYING | Apply phase is currently running. |
| APPLIED | All valid rows have been upserted to Parcel records. |
| FAILED | A pipeline error occurred. error_details contains the failure message. |
| DUPLICATE | Upload was rejected because a batch with the same (tax_year, file_hash_sha256) already exists. |

Adding a New Batch (Upload)

URL: /admin/appeals/parcelimportbatch/add/

This is how you upload a parcel CSV directly through the admin. The upload form has the following fields:

| Field | Required | Description |
|---|---|---|
| Tax Year | Yes | The assessment year for this import. Combined with the file hash to enforce deduplication. |
| Schema Profile | No | The ParcelSchemaProfile to use for validation. Required in practice — without one the validate phase cannot map any columns. |
| Source Name | No | Free-text label for the data source (e.g., County Assessor Export 2026). Informational only. |
| Uploaded File | Yes | The CSV file to import. Must be non-empty. |
| Notes | No | Free-text notes about this batch. |

On save:

  • The file's SHA256 hash is computed from the uploaded bytes.
  • If a batch with the same (tax_year, file_hash_sha256) already exists, the form raises a validation error and the upload is rejected. The error message includes the ID of the existing duplicate batch.
  • If accepted, the batch is saved with status UPLOADED and the file is stored under media/parcel_imports/<year>/.
  • The uploaded_by field is set to the current user automatically.
  • Raw rows are not created at this point — ingest runs when the first job (validate or apply) processes the batch.
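The hash-based deduplication on save can be sketched as follows (illustrative helper names):

```python
import hashlib

def file_hash_sha256(data: bytes) -> str:
    """Hex SHA-256 of the uploaded bytes — one half of the dedup key."""
    return hashlib.sha256(data).hexdigest()

def is_duplicate(existing: set, tax_year: int, data: bytes) -> bool:
    """Sketch of the upload check: reject when (tax_year, hash) is taken."""
    return (tax_year, file_hash_sha256(data)) in existing
```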

After saving, proceed to the batch detail page to queue background jobs.


Batch Detail View

URL: /admin/appeals/parcelimportbatch/<id>/change/

Editable Fields (on existing batches)

Once a batch is saved, only schema_profile, source_name, and uploaded_file remain editable through the form. All status, count, and diagnostic fields are read-only.

Read-Only Fields

| Field | Description |
|---|---|
| Uploaded At | Timestamp of upload |
| Uploaded By | Staff user who created the batch |
| File Hash SHA256 | SHA256 of the uploaded file — used for deduplication |
| Row Count Total | Rows stored as ParcelRawRow records (populated after ingest) |
| Row Count Skipped Blank Account | Rows skipped during ingest because the account field was blank |
| Row Count Valid | Rows that passed validation (populated after validate phase) |
| Row Count Invalid | Rows that failed validation (populated after validate phase) |
| Status | Current pipeline state |
| Error Details | Pipeline-level error message if status is FAILED |
| Notes | Notes entered at upload time |

Background Actions (inline buttons)

Two buttons appear in the Background Actions read-only field:

  • Queue Validation — queues a VALIDATE job for this batch. If a validate or apply job is already QUEUED or RUNNING for this batch, reports the existing job ID and does nothing.
  • Queue Apply — queues an APPLY job for this batch. The apply worker will auto-validate if needed before applying.

Both buttons redirect back to the batch detail page after queuing.

Latest Job Progress

Displays the most recently created job for this batch:

<Action> — <Status> | <current>/<total> (<percent>%) | <message>

Examples:

  • Validate — Running | 1500/8000 (18%) | Validating
  • Apply — Success | 8000/8000 (100%) | Completed
  • Apply — Failed | 450/8000 (5%) | Failed

This field does not auto-refresh — reload the page to see updated progress.

Top Errors (sample)

Aggregates error messages from the first 200 invalid ParcelRawRow records, counts occurrences of each distinct message, and shows the top 20 most frequent errors in descending order.

Format: <count> × <error message>

Example:

142 × Required field 'account' is missing or blank.
37 × Field 'res_assessed_value': cannot convert 'N/A' to decimal.
12 × Field 'sale_date_1': unrecognized date format '00/00/0000'.

Use this to quickly identify systematic problems — a transform misconfiguration, a missing column mapping, or a data quality issue in the source file.
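The aggregation described above is essentially a counting pass, sketched here for illustration (the real logic is error_summary() in the admin):

```python
from collections import Counter

def top_errors(invalid_rows: list, sample: int = 200, top: int = 20) -> list:
    """Sketch of the aggregation: tally messages from the first `sample`
    invalid rows and return the `top` most frequent as display strings."""
    counts = Counter()
    for row in invalid_rows[:sample]:
        counts.update(row.get("errors") or [])
    return [f"{n} × {message}" for message, n in counts.most_common(top)]
```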

Invalid Rows (sample)

Shows the first 20 invalid rows with their row number and error list:

Row 14: ["Field 'res_assessed_value': cannot convert 'N/A' to decimal."]
Row 23: ["Required field 'account' is missing or blank."]

For deeper inspection of individual rows (including the full raw and normalized JSON), navigate to Parcel Row Import Manager and filter by this batch.


Typical Workflow

  1. Upload — Add a new batch via the Add page. Select tax year, schema profile, and file. Save.
  2. Queue Validation — On the batch detail page, click Queue Validation. The parcel_import_worker will pick this up and run the validate phase.
  3. Review — Reload the page to check Latest Job Progress, Row Count Valid / Invalid, Top Errors, and Invalid Rows (sample).
  4. Decide — If invalid rows are acceptable (e.g., a known subset of unmappable records), proceed to apply. If errors indicate a schema or data problem, fix the profile or re-export the CSV and upload again.
  5. Queue Apply — Click Queue Apply. The worker upserts all valid rows to Parcel records.
  6. Confirm — Verify in Property Account Search that parcel records exist and look correct.

Deduplication

A batch is rejected at upload time if a ParcelImportBatch record already exists with the same (tax_year, file_hash_sha256). The error message identifies the conflicting batch by ID.

To import the same file again legitimately:

  • Use the CLI with --force to bypass the hash check.
  • Or modify the file (even a single whitespace change will produce a different hash).
  • Or dump all parcel records first (if starting fresh).

Re-applying an already-applied batch is idempotent — the same valid rows simply update existing Parcel records in place.


Relationship to Other Pages

| Page | Relationship |
|---|---|
| Data Tools | Alternative upload path (file upload and blob URL); also shows last 10 jobs in the Parcel Background Jobs table |
| Parcel Import Jobs | Every Queue Validation / Queue Apply action creates a job visible here; full job history and error details |
| Parcel Row Import Manager | All ParcelRawRow records for a batch; full raw / normalized / errors inspection |
| Property Account Search | Parcel records created by applying this batch; each Parcel links back to its current_import_batch |
| Parcel Column Import Manager | Schema profiles available for selection on this page |

Key Code Locations

| Path | Description |
|---|---|
| appeals/models.py:867 | ParcelImportStatus choices |
| appeals/models.py:876 | ParcelImportBatch model — all fields, unique constraint, ordering |
| appeals/admin.py:316 | ParcelImportBatchAdmin — upload form, list display, read-only fields, queue buttons, bulk actions, error summary, invalid sample |
| appeals/admin.py:384 | save_model() — auto-sets uploaded_by |
| appeals/admin.py:390 | queue_actions() — Queue Validation / Queue Apply inline buttons |
| appeals/admin.py:404 | job_progress() — Latest Job Progress display |
| appeals/admin.py:524 | error_summary() — Top Errors aggregation logic |
| appeals/admin.py:544 | invalid_sample() — first 20 invalid rows |
| appeals/services/parcel_import_jobs.py | queue_parcel_job() — deduplication and job creation |
| appeals/services/parcel_import_pipeline.py | ingest_file(), validate_batch(), apply_batch() |

Login, User Management, and Permissions

Login Entry Points

Pie has two distinct login paths depending on the user's role.

Selector Page (/login/)

The root login selector at /login/ presents two buttons:

  • Local Login (/accounts/login/?next=/portal/) — for portal (non-admin) users authenticating with a username and password.
  • Admin Login (/admin/login/?next=/admin/) — for staff and superusers accessing the Django admin.

Already-authenticated users are automatically redirected to their appropriate destination (/admin/ or /portal/) without seeing the selector.


Local / Portal Login (/accounts/login/)

Used by portal-only users. Renders the allauth local login form with:

  • Username and password fields
  • Local User Sign In button
  • Continue with Microsoft button (only shown if Microsoft SSO is configured)
  • Forgot password link

After successful login, RoleAwareAccountAdapter.get_login_redirect_url() sends admin-capable users to /admin/ and all others to /portal/.


Admin Login (/admin/login/)

Used by staff and superusers. Renders the Django admin login form with:

  • Username and password fields
  • Remember Me checkbox
  • Admin User Sign In button
  • Continue with Microsoft button (only shown if Microsoft SSO is configured)

Remember Me behavior:

  • Checked → session persists for SESSION_COOKIE_AGE (default Django value: 2 weeks)
  • Unchecked → session expires when the browser is closed

If an authenticated user who does not meet admin access requirements hits this URL, they are redirected to /portal/.


Microsoft SSO (/accounts/microsoft/login/)

Both login forms show a Continue with Microsoft button when Microsoft SSO is configured. Clicking it initiates an OAuth2 flow via the allauth Microsoft social provider.

How it works:

  1. The user is redirected to Microsoft's login page.
  2. Microsoft authenticates the user and returns an email address.
  3. Pie's MicrosoftSocialAccountAdapter intercepts the callback and checks:
    • The email returned by Microsoft is non-empty.
    • A Django user with that email already exists in Pie's database.
    • That user's account is active (is_active = True).
  4. If all checks pass, the Microsoft account is linked to the existing Django user and the login proceeds.
  5. If any check fails, the user is redirected back to /login/ with an error message.

Microsoft SSO does not create new users. The Django user account must exist first. Self-signup is disabled (is_open_for_signup returns False).
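The three adapter checks can be sketched with an in-memory user lookup — an illustrative stand-in for the Django query in MicrosoftSocialAccountAdapter:

```python
def sso_user_allowed(users_by_email: dict, email: str):
    """Sketch of the three checks (in-memory stand-in for the Django
    lookup in MicrosoftSocialAccountAdapter)."""
    email = (email or "").strip().lower()
    if not email:
        return None  # Microsoft returned no email
    user = users_by_email.get(email)
    if user is None:
        return None  # no pre-existing Django user — SSO never creates one
    if not user.get("is_active"):
        return None  # deactivated accounts are rejected
    return user      # all checks passed; the social account gets linked
```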

The next parameter controls post-login destination:

  • Portal-bound: /accounts/microsoft/login/?process=login&next=%2Fportal%2F
  • Admin-bound: /accounts/microsoft/login/?process=login&next=%2Fadmin%2F

Setting Up Microsoft SSO for a User

Step 1 — Verify environment variables are set

Microsoft SSO is only active when both of the following environment variables are set on the server (Azure Web App Application Settings):

| Variable | Description |
|---|---|
| MICROSOFT_CLIENT_ID | Azure AD app registration Client ID |
| MICROSOFT_CLIENT_SECRET | Azure AD app registration Client Secret |
| MICROSOFT_TENANT | Azure AD tenant ID or common (default: common) |

The Continue with Microsoft button will not appear on login pages unless both MICROSOFT_CLIENT_ID and MICROSOFT_CLIENT_SECRET are set.

Step 2 — Create the Django user account first

A Pie user account must exist before Microsoft SSO can be used. Create the user in the Django admin:

  1. Go to Django admin → Authentication and Authorization → Users → Add User.
  2. Set a username (can be anything — the email address is recommended for clarity).
  3. Set a temporary password (the user will never need to use it if they always log in via Microsoft).
  4. Click Save and continue editing.
  5. Fill in the user's Email address — this must exactly match the email in their Microsoft account (case-insensitive).
  6. Set Active = checked.
  7. Assign the appropriate staff status and group (see Roles and Permissions below).
  8. Save.

Step 3 — Link the Microsoft account (automatic on first sign-in)

The user's Microsoft account is linked to their Pie account automatically on their first successful Microsoft SSO login. No manual linking step is required — MicrosoftSocialAccountAdapter.pre_social_login() calls sociallogin.connect(request, user) on every login, which creates or updates the SocialAccount link.

Step 4 — Verify the link (optional)

Linked social accounts can be inspected in the Django admin under Social Accounts → Social accounts. Each linked account shows the provider (microsoft), the associated Django user, and the last login timestamp.


User Roles and Permissions

Pie uses three distinct access tiers, controlled by Django's built-in is_superuser, is_staff, and groups flags.

Superuser

  • is_superuser = True
  • Full, unrestricted access to all of Django admin — all models, all actions.
  • Bypasses all group checks.
  • Always passes user_can_access_admin().
  • Can access all admin sections hidden from regular staff (e.g., Parcel Column Import Manager, Parcel Row Import Manager, certain raw import pipeline models).

When to use: System administrators and developers only.

Staff (Admin Portal User)

  • is_staff = True AND member of the AdminPortal group
  • Can access Django admin (/admin/).
  • Subject to Django's standard object-level permissions — only sees and can edit models their permissions allow.
  • Cannot access superuser-only sections.

When to use: Board of equalization staff who use the admin to manage appeals, hearings, meetings, and documents.

Access rule: Both conditions must be true. is_staff = True alone is not sufficient — the user must also be in the AdminPortal group. The group name is configured via the ADMIN_PORTAL_GROUP_NAME environment variable (defaults to AdminPortal).

Portal User

  • is_staff = False, is_superuser = False
  • No access to Django admin.
  • Can log in at /accounts/login/ and access /portal/ only.
  • Sees appeal status and related information for their own appeals.

When to use: Property owners or representatives who filed an appeal and need read-only portal access.


Creating and Managing Users

Create a new user

  1. Django admin → Authentication and Authorization → Users → Add User.
  2. Enter username and password. Click Save and continue editing.
  3. Fill in:
    • Email — required for Microsoft SSO; must match their Microsoft account email exactly.
    • First name / Last name — optional but recommended.
    • Active — must be checked for the user to be able to log in.
    • Staff status — check this for admin users.
    • Superuser status — check only for system administrators.
  4. Under Groups, add the user to AdminPortal for admin access.
  5. Under User permissions, add individual Django model permissions if needed (usually handled through groups instead).
  6. Save.

Deactivate a user

Set Active = unchecked. The user cannot log in via password or Microsoft SSO. Their data and audit history are preserved.

Reset a password

In the user's admin edit page, use the Change password link. The self-service password reset at /accounts/password/reset/ is currently out of service — staff must reset passwords manually through the admin.

Delete a user

Deleting a user is permanent and will break any foreign key references to that user. Deactivating is preferred.


Managing Groups and Permissions

Groups in Django are named collections of permissions that can be granted to many users at once. In Pie, the AdminPortal group is the primary mechanism for granting admin access to staff users.

Create or edit the AdminPortal group

  1. Django admin → Authentication and Authorization → Groups.
  2. Click AdminPortal (or Add Group if it does not exist yet).
  3. Set the group Name to AdminPortal (must match the ADMIN_PORTAL_GROUP_NAME setting exactly).
  4. Under Permissions, add the Django model permissions the group's members should have (e.g., appeals | appeal | Can add appeal, appeals | appeal | Can change appeal, etc.).
  5. Save.

Add a user to the AdminPortal group

On the user's edit page, scroll to Groups and move AdminPortal to the Chosen groups list. Save.

From the Group page, members can be viewed but not assigned; always use the User edit page for group assignment.

Additional groups

You can create additional groups for more granular permission segmentation (e.g., a read-only staff group). Assign those groups to users in the same way. Pie does not enforce any specific secondary group names beyond AdminPortal for admin gate access.


Access Policy Summary

| User state | Can access /admin/ | Can access /portal/ |
|---|---|---|
| is_superuser = True | Yes | Yes (redirected to /admin/) |
| is_staff = True + AdminPortal group | Yes | Yes (redirected to /admin/) |
| is_staff = True, not in AdminPortal group | No | Yes |
| is_staff = False, is_superuser = False | No | Yes |
| is_active = False | No | No |

The admin access check is enforced by user_can_access_admin() in appeals/auth_roles.py, which patches admin.site.has_permission globally via enforce_admin_site_access_policy().
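The patching technique can be illustrated with stub objects. This is a simplified sketch of the pattern, not the code in appeals/auth_roles.py; all of the stub classes below are invented for demonstration.

```python
class StubAdminSite:
    """Stand-in for django.contrib.admin.site."""
    def has_permission(self, request):
        # Django's default rule: any active staff user may enter the admin.
        return request.user.is_active and request.user.is_staff

class StubUser:
    def __init__(self, is_active=True, is_staff=False,
                 is_superuser=False, groups=()):
        self.is_active, self.is_staff = is_active, is_staff
        self.is_superuser, self.groups = is_superuser, set(groups)

class StubRequest:
    def __init__(self, user):
        self.user = user

def user_can_access_admin(user):
    # Mirrors the policy table: superuser, or staff + AdminPortal group.
    if not user.is_active:
        return False
    return user.is_superuser or (
        user.is_staff and "AdminPortal" in user.groups
    )

def enforce_admin_site_access_policy(site):
    """Replace the site's has_permission so the stricter rule applies globally."""
    site.has_permission = lambda request: user_can_access_admin(request.user)

site = StubAdminSite()
enforce_admin_site_access_policy(site)
```

After patching, a staff-only user is turned away at the admin gate, while a staff user in the AdminPortal group is admitted.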


Audit Logging

All login activity is recorded automatically:

  • Successful login — logged by audit_user_login() in appeals/signals.py. Records user ID, username, email, IP address, path, and group membership at login time.
  • Failed login attempt — logged by audit_user_login_failed(). Records the attempted username and IP. Passwords are never captured.
  • Logout — logged by audit_user_logout().

Audit records are viewable in Django admin → Audit Logs, or exportable via Data Tools → Audit Logs Export.


Key Code Locations

| Path | Description |
|---|---|
| appeals/auth_roles.py | user_can_access_admin(), enforce_admin_site_access_policy(), role-aware redirect logic |
| appeals/adapters.py | MicrosoftSocialAccountAdapter — blocks unknown/inactive users from Microsoft SSO |
| appeals/account_adapters.py | RoleAwareAccountAdapter — post-login redirect by role, safe URL enforcement |
| appeals/views.py | admin_login_view(), user_login_view(), portal_home(), _microsoft_sso_configured() |
| appeals/signals.py | audit_user_login(), audit_user_login_failed(), audit_user_logout() |
| appealsys/settings.py | AUTHENTICATION_BACKENDS, ACCOUNT_ADAPTER, SOCIALACCOUNT_ADAPTER, ADMIN_PORTAL_GROUP_NAME, Microsoft SSO env vars |
| appealsys/urls.py | URL wiring for /login/, /admin/login/, /accounts/, /portal/ |