Auxiliary Management
Schema, data, and access management
Use this page for prerequisite schema setup, administrative data operations, and user account governance.
Parcel Schema Profiles
This section is a required prerequisite for Loading Property Account Information.
What Is a ParcelSchemaProfile?
A ParcelSchemaProfile is a named, reusable translation table that tells Pie how to read a vendor's CSV export. Because different county assessors and GIS platforms (such as Esri/ArcGIS) use their own column naming conventions, Pie does not assume a fixed column layout. Instead, every import requires a profile that explicitly maps each CSV column header to one of Pie's canonical internal fields.
Without a profile, Pie cannot interpret any parcel data - the import pipeline will refuse to proceed.
Why This Matters for Esri / ArcGIS Exports
Property assessment data exported from Esri/ArcGIS (or any GIS-based assessor platform) typically has column names that reflect the platform's own data model rather than any universal standard. For example, a situs address column might be called SiteAddress, SitusAddr, PropertyAddress, or something else entirely depending on how the jurisdiction has configured their system.
A ParcelSchemaProfile bridges this gap:
- It records the exact column names present in the vendor export.
- It specifies how each value should be parsed or normalized (for example, trimming whitespace, parsing a decimal currency field, converting a date from epoch milliseconds).
- It marks which columns are required for a row to be considered valid.
Profiles are reusable across tax years - once configured for a given assessor extract layout, the same profile can be used for every annual import as long as the column structure does not change.
Creating a Profile
Step 1 - Open the admin page
Go to Django admin → Parcel Column Import Manager.
This section is visible to superusers only.
Step 2 - Add a new profile
Click Add Parcel Schema Profile and fill in:
| Field | Description |
|---|---|
| Name | A short, unique identifier for the profile (for example, esri-default). Used to select the profile during import. |
| Vendor | The data source platform. Currently the only choice is Esri / ArcGIS. |
| Description | Optional free-text note describing the data source or county. |
| Active | Must be checked for the profile to appear in import dropdowns. |
Step 3 - Add column mappings
Under Column Mappings, add one row per CSV column you want to import. Each row has:
| Field | Description |
|---|---|
| Source Field | The exact column header as it appears in the CSV. Case and spacing must match precisely. |
| Canonical Field | The Pie field this column maps to (choose from the dropdown - see the full list below). |
| Transform | How the raw string value should be parsed before storing (see transforms below). |
| Required | If checked, rows without this field will be marked invalid. Check this for account at minimum. |
| Friendly Label | Display-only label shown in the admin UI. Does not affect import logic. |
Tip: After saving, use the Auto-fill Friendly Labels action to populate any blank friendly labels automatically from the canonical field name.
Step 4 - Save
Save the profile. It is immediately available for use in any import method (UI or CLI).
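Before running an import against a new extract, it can help to confirm that the CSV header actually contains every column you marked Required. A small pre-flight sketch (the `preflight` helper is illustrative, not part of Pie):

```python
import csv
import io

def preflight(csv_text: str, required_columns: set[str]) -> list[str]:
    """Return required source columns missing from the CSV header row.
    Matching is deliberately case- and whitespace-sensitive, mirroring
    how Pie matches source_field values."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return sorted(required_columns - set(header))

sample = "Account,SiteAddress,TotalMarketValue\n12345,1 Main St,250000\n"
print(preflight(sample, {"Account"}))               # []
print(preflight(sample, {"Account", "Parcel_ID"}))  # ['Parcel_ID']
```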
Importing and Exporting Profiles via JSON
Profiles can be exported to a JSON file and re-imported on another environment (for example, from local to production, or to back up a configuration). This avoids manual re-entry.
Export
- Go to Data Tools.
- Under the schema profile export section, select the profile and click Export.
- A `.json` file is downloaded in the `pie.parcel-schema-profile` format.
Import
- Go to Data Tools.
- Under the schema profile import section, upload the `.json` file.
- Pie validates the file and upserts the profile by name:
  - If the profile name does not exist, it is created.
  - If it already exists, its vendor/description/active fields are updated and all existing column mappings are replaced with the imported ones.
Warning: Importing a profile with an existing name will delete and replace all of its current column mappings. This is intentional - the import is designed to be idempotent.
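The upsert semantics can be sketched in plain Python, using a dict as an in-memory stand-in for the database (field names follow the `pie.parcel-schema-profile` JSON format documented on this page):

```python
def upsert_profile(store: dict, payload: dict) -> str:
    """Sketch of the documented upsert: profile settings are updated in
    place, but column mappings are replaced wholesale, never merged."""
    profile = payload["profile"]
    name = profile["name"]
    action = "updated" if name in store else "created"
    store[name] = {
        "vendor": profile["vendor"],
        "description": profile.get("description", ""),
        "active": profile.get("active", True),
        # Full replacement: any previous mappings are discarded.
        "field_maps": list(profile["field_maps"]),
    }
    return action

store = {}
payload = {"profile": {"name": "esri-default", "vendor": "esri",
                       "field_maps": [{"source_field": "Account"}]}}
print(upsert_profile(store, payload))  # created
print(upsert_profile(store, payload))  # updated
```

Running the same import twice leaves the store in the same state, which is what makes the operation idempotent.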
The esri-default Profile (Reference)
The esri-default profile is the standard mapping for Esri/ArcGIS property assessment exports. Its column mappings are:
| CSV Column (Source Field) | Canonical Field | Transform |
|---|---|---|
| Account | account | strip (required) |
| AgriculturalAssessedValue | ag_assessed_value | decimal |
| AgriculturalImprovementValue | ag_improvement_value | decimal |
| AgriculturalLandValue | ag_land_value | decimal |
| CommercialAssessedValue | com_assessed_value | decimal |
| CommercialImprovementValue | com_improvement_value | decimal |
| CommercialLandValue | com_land_value | decimal |
| MailingAddress | mailing_address | strip |
| Municipality | municipality | strip |
| Owner | owner_name | strip |
| Parcel_ID | parcel_id | strip |
| ParcelReport | parcel_report_url | strip |
| PrevOwner1 | prev_owner_1 | strip |
| ResidentialAssessedValue | res_assessed_value | decimal |
| ResidentialImprovementValue | res_improvement_value | decimal |
| ResidentialLandValue | res_land_value | decimal |
| SalePrice1 | sale_price_1 | decimal |
| SiteAddress | situs_address | strip |
| SitusZip | situs_zip | strip |
| TotalMarketValue | assessor_total_market_value | decimal |
Note that assessor_total_assessed_value is not imported directly - it is auto-computed by the pipeline from the sum of residential, commercial, and agricultural assessed values if not explicitly provided.
The JSON representation of this profile (suitable for import) follows the pie.parcel-schema-profile format:
{
"format": "pie.parcel-schema-profile",
"format_version": 1,
"profile": {
"name": "esri-default",
"vendor": "esri",
"description": "Default ESRI mapping",
"active": true,
"field_maps": [
{ "source_field": "Account", "canonical_field": "account", "transform": "strip", "required": true, "friendly_label": "Account" },
{ "source_field": "AgriculturalAssessedValue", "canonical_field": "ag_assessed_value", "transform": "decimal", "required": false, "friendly_label": "Ag Assessed Value" },
{ "source_field": "AgriculturalImprovementValue", "canonical_field": "ag_improvement_value", "transform": "decimal", "required": false, "friendly_label": "Ag Improvement Value" },
{ "source_field": "AgriculturalLandValue", "canonical_field": "ag_land_value", "transform": "decimal", "required": false, "friendly_label": "Ag Land Value" },
{ "source_field": "CommercialAssessedValue", "canonical_field": "com_assessed_value", "transform": "decimal", "required": false, "friendly_label": "Com Assessed Value" },
{ "source_field": "CommercialImprovementValue", "canonical_field": "com_improvement_value", "transform": "decimal", "required": false, "friendly_label": "Com Improvement Value" },
{ "source_field": "CommercialLandValue", "canonical_field": "com_land_value", "transform": "decimal", "required": false, "friendly_label": "Com Land Value" },
{ "source_field": "MailingAddress", "canonical_field": "mailing_address", "transform": "strip", "required": false, "friendly_label": "Mailing Address" },
{ "source_field": "Municipality", "canonical_field": "municipality", "transform": "strip", "required": false, "friendly_label": "Municipality" },
{ "source_field": "Owner", "canonical_field": "owner_name", "transform": "strip", "required": false, "friendly_label": "Owner Name" },
{ "source_field": "Parcel_ID", "canonical_field": "parcel_id", "transform": "strip", "required": false, "friendly_label": "Parcel Id" },
{ "source_field": "ParcelReport", "canonical_field": "parcel_report_url", "transform": "strip", "required": false, "friendly_label": "Parcel Report Url" },
{ "source_field": "PrevOwner1", "canonical_field": "prev_owner_1", "transform": "strip", "required": false, "friendly_label": "Prev Owner 1" },
{ "source_field": "ResidentialAssessedValue", "canonical_field": "res_assessed_value", "transform": "decimal", "required": false, "friendly_label": "Res Assessed Value" },
{ "source_field": "ResidentialImprovementValue", "canonical_field": "res_improvement_value", "transform": "decimal", "required": false, "friendly_label": "Res Improvement Value" },
{ "source_field": "ResidentialLandValue", "canonical_field": "res_land_value", "transform": "decimal", "required": false, "friendly_label": "Res Land Value" },
{ "source_field": "SalePrice1", "canonical_field": "sale_price_1", "transform": "decimal", "required": false, "friendly_label": "Sale Price 1" },
{ "source_field": "SiteAddress", "canonical_field": "situs_address", "transform": "strip", "required": false, "friendly_label": "Situs Address" },
{ "source_field": "SitusZip", "canonical_field": "situs_zip", "transform": "strip", "required": false, "friendly_label": "Situs Zip" },
{ "source_field": "TotalMarketValue", "canonical_field": "assessor_total_market_value", "transform": "decimal", "required": false, "friendly_label": "Assessor Total Market Value" }
]
},
"compatibility": {
"dropped_legacy_mappings": []
}
}
Canonical Fields Reference
These are all the fields Pie can store on a Parcel record. Every column mapping must point to one of these.
| Canonical Field | Description | Typical Transform |
|---|---|---|
| account | Account number (required) | strip |
| parcel_id | Parcel / APN identifier | strip |
| owner_name | Property owner name | strip |
| situs_address | Property situs (physical) address | strip |
| mailing_address | Owner mailing address (full string) | strip |
| municipality | Municipality / taxing jurisdiction | strip |
| situs_zip | Situs ZIP code | strip |
| assessor_total_market_value | Total assessor market value | decimal |
| assessor_total_assessed_value | Total assessor assessed value (auto-computed if absent) | decimal |
| res_assessed_value | Residential assessed value | decimal |
| res_improvement_value | Residential improvement value | decimal |
| res_land_value | Residential land value | decimal |
| com_assessed_value | Commercial assessed value | decimal |
| com_improvement_value | Commercial improvement value | decimal |
| com_land_value | Commercial land value | decimal |
| ag_assessed_value | Agricultural assessed value | decimal |
| ag_improvement_value | Agricultural improvement value | decimal |
| ag_land_value | Agricultural land value | decimal |
| prev_owner_1 | Previous owner name | strip |
| sale_price_1 | Most recent sale price | decimal |
| sale_date_1 | Most recent sale date | date |
| parcel_report_url | URL to external assessor parcel report | strip |
`assessor_total_assessed_value` is automatically computed as `res_assessed_value + com_assessed_value + ag_assessed_value` if it is not mapped or the mapped value is blank.
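The fallback computation can be illustrated as follows (a sketch of the documented rule, not the pipeline's actual code):

```python
from decimal import Decimal

def total_assessed_value(row: dict) -> Decimal:
    """If assessor_total_assessed_value is absent or blank, fall back to
    the sum of the residential, commercial, and agricultural values."""
    explicit = row.get("assessor_total_assessed_value")
    if explicit not in (None, ""):
        return Decimal(str(explicit))
    parts = ("res_assessed_value", "com_assessed_value", "ag_assessed_value")
    return sum(
        (Decimal(str(row[k])) for k in parts if row.get(k) not in (None, "")),
        Decimal("0"),
    )

row = {"res_assessed_value": "150000", "com_assessed_value": "",
       "ag_assessed_value": "2500"}
print(total_assessed_value(row))  # 152500
```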
Transforms Reference
| Transform | Behavior | Use For |
|---|---|---|
| identity | Raw value, no change | Any field where no normalization is needed |
| strip | Strip leading/trailing whitespace | Text fields - addresses, names, IDs |
| upper | Convert to uppercase | Account numbers or codes requiring uniform casing |
| lower | Convert to lowercase | |
| decimal | Parse as a decimal number; handles commas (for example, 1,234.56) | All currency / value fields |
| int | Parse as an integer | Integer counts |
| date | Parse common date formats: YYYY-MM-DD, MM/DD/YYYY, MM/DD/YY, YYYY/MM/DD; strips Esri timezone suffixes (for example, 2026-01-15T00:00:00.000Z) | Date fields from most assessor exports |
| date_epoch_ms | Convert epoch milliseconds (integer) to a date | Date fields from Esri feature service JSON exports |
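Illustrative Python equivalents of these transforms (a sketch only; the real parsing lives in `_transform_value()` and may handle more edge cases):

```python
from datetime import datetime, timezone
from decimal import Decimal

TRANSFORMS = {
    "identity": lambda v: v,
    "strip": lambda v: v.strip(),
    "upper": lambda v: v.strip().upper(),
    "lower": lambda v: v.strip().lower(),
    # Commas are removed so "1,234,567.00" parses cleanly.
    "decimal": lambda v: Decimal(v.replace(",", "").strip()),
    "int": lambda v: int(v.replace(",", "").strip()),
    # Epoch milliseconds -> date (Esri feature service exports).
    "date_epoch_ms": lambda v: datetime.fromtimestamp(
        int(v) / 1000, tz=timezone.utc
    ).date(),
    # "date" is omitted here: it tries several string formats in turn.
}

print(TRANSFORMS["decimal"]("1,234,567.00"))         # 1234567.00
print(TRANSFORMS["date_epoch_ms"]("1737936000000"))  # 2025-01-27
```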
Tips for Working With Esri / ArcGIS Exports
- Open the CSV in a text editor first. Confirm the exact header row. Esri exports sometimes include non-breaking spaces (`\u00a0`) in column names - Pie strips these automatically during ingest, but confirming headers upfront prevents confusion.
- Match source_field exactly. Source field matching is case-sensitive and whitespace-sensitive. If the CSV header is `Parcel_ID` (with underscore), the source_field must be `Parcel_ID`, not `ParcelID` or `parcel_id`.
- Use `decimal` for all currency/value columns. Esri exports often format numbers with commas (`1,234,567.00`). The `decimal` transform handles this; `identity` will not parse it as a number.
- Date columns from feature services use epoch ms. If you are exporting directly from an Esri feature service (REST API) rather than a file geodatabase export, date values may be integer epoch milliseconds (for example, `1737936000000`). Use `date_epoch_ms` for those columns. Standard file exports typically use `YYYY-MM-DD` strings; use `date` for those.
- Only map the columns you have. You do not need to map every canonical field. Unmapped canonical fields will simply be blank on the resulting `Parcel` record. Map only what your assessor extract actually provides.
- One profile per extract layout. If different counties use different column names, create a separate profile for each. Profiles are cheap to create and the name makes it clear which profile to use for which source.
- Re-use across tax years. A profile does not encode a tax year. The same `esri-default` profile works for 2025, 2026, and beyond as long as the column structure has not changed.
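Checking headers for invisible characters can be done quickly from a Python shell: printing each header with `repr()` makes a non-breaking space or stray whitespace obvious. A small sketch:

```python
import csv
import io

def inspect_headers(csv_text: str) -> list[str]:
    """Print each CSV header with repr() so hidden characters show up."""
    headers = next(csv.reader(io.StringIO(csv_text)))
    for h in headers:
        print(repr(h))
    return headers

# A header row containing a non-breaking space that looks like a normal
# space in most editors:
headers = inspect_headers("Account,Site\u00a0Address\nA1,1 Main St\n")
# 'Account'
# 'Site\xa0Address'
```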
Key Code Locations
| Path | Description |
|---|---|
| appeals/models.py | ParcelSchemaProfile, ParcelFieldMap, CANONICAL_FIELDS, ParcelTransform |
| appeals/admin.py | ParcelSchemaProfileAdmin, ParcelFieldMapInline |
| appeals/views.py | export_parcel_schema_profile(), import_parcel_schema_profile(), _parcel_schema_export_payload() |
| appeals/services/parcel_import_pipeline.py | _load_field_map(), _transform_value() - where the profile is consumed during import |
Data Tools (/admin/data-tools/)
What Is Data Tools?
Data Tools is the central admin page for bulk data operations in Pie. It provides export and import controls for every major dataset, parcel-specific import tooling, schema profile management, and a full-instance migration system. All actions on this page require Django admin staff access — every route is wrapped in admin.site.admin_view().
Page Layout
The page is divided into the following sections, in order:
- Parcel Background Jobs — conditionally shown; lists the 10 most recent parcel import jobs with status and row counters. Refresh the page to update.
- Dataset Cards — side-by-side export/import cards for every standard dataset.
- Parcel Schema Export / Import — JSON transfer of a `ParcelSchemaProfile` between environments.
- Migration Ledger Export / Import — full-instance data migration via a deterministic JSON snapshot.
- Database Manifest Export — export-only manifest of every Pie model/table.
- Audit Logs Export — export-only full audit log.
- Parameter Manifest Export — export-only field-level schema manifest.
- Parcel Sections (bottom of page):
- Parcels.csv Import (Upload File)
- Parcels.csv Import (Azure Blob URL)
- Dump Parcel Records
Parcel Background Jobs
When any parcel import job is active or has run recently, a status table appears at the top of the page showing:
| Column | Description |
|---|---|
| Action | VALIDATE, APPLY, BLOB_IMPORT, DUMP_PARCELS, or PURGE_RAW_ROWS |
| Status | QUEUED, RUNNING, SUCCESS, or FAILED |
| Batch | The ParcelImportBatch ID the job is working on (or -- for non-batch jobs) |
| Progress | current / total row count |
| Message | Latest progress message from the worker |
Progress does not auto-refresh — reload the page to get the latest status.
Standard Dataset Cards
For each dataset below, there is an Export card (downloads a CSV immediately) and an Import card (uploads a CSV and applies it). Datasets with extra fields require additional form inputs before import.
| Dataset | Export File | Import Behavior | Extra Fields |
|---|---|---|---|
| Appeals | Appeals.csv | Overwrites existing appeal records by appeal_id | — |
| Hearings | Hearings.csv | Replaces the hearings table; appeal_id column must reference valid Appeal IDs | — |
| Documents | Documents.csv | Replaces the documents table | — |
| Document Types | DocumentTypes.csv | Replaces the document types table | — |
| Meetings | Meetings.csv | Replaces the meetings table | — |
| Meeting Items | MeetingAppeal.csv | Replaces the MeetingAppeal table | — |
| Board Actions | Motion.csv | Replaces the Motion table | — |
| Contacts | Contacts.csv | Overwrites all existing contacts | — |
| Appeal Parties | AppealParties.csv | Replaces the appeal parties table | — |
| Action Types | ActionTypes.csv | Replaces the action types table | — |
| Services | Services.csv | Replaces the Services table by service_code; deletes rows not in the file | — |
| Prior Determination Accounts | PriorDeterminations.csv | Replaces all prior determination records for the selected year. Accepts PreviousAppeals.csv (with AppealNumber, AccountNumber, ReasonCode) or BlackList.csv (with AppealNumber, AccountNumber). Rows with ReasonCode 99 are skipped. | Tax year |
| Schedule Slot | ScheduleSlot.csv | Replaces the schedule slots table | — |
| Error References | ErrorReferences.csv | Upserts records by error_reference_code | — |
| Communications | Communications.csv | Creates and updates communication records | — |
| CommunicationSubjects | CommunicationSubjects.csv | Creates and updates communication subject rows | — |
Export-only datasets (no import card):
| Dataset | Export File | Purpose |
|---|---|---|
| Audit Logs | AuditLogs.csv | Full audit log for review or archiving |
| Database Manifest | DatabaseManifest.csv | Lists every model in Pie |
| Parameter Manifest | ParameterManifest.csv | Field-level manifest for every model/table including data types, nullability, and PK/FK linkage metadata |
Parcel Schema Export / Import
Transfers a ParcelSchemaProfile (column mappings) between Pie environments as a JSON file.
Export
- Select a profile from the dropdown and click Download ParcelSchemaProfile.json.
- Produces a `pie.parcel-schema-profile` JSON file containing the profile settings and all column mappings.
- URL: `POST /admin/data-tools/export/parcel-schema/`
Import
- Upload a previously exported `.json` file and click Import ParcelSchemaProfile.json.
- Matches by profile name:
- If the name does not exist, a new profile is created.
- If the name already exists, vendor/description/active are updated and all existing column mappings are deleted and replaced with the imported ones.
- URL: `POST /admin/data-tools/import/parcel-schema/`
See parcel_schema_profile.md for the full reference on schema profiles.
Migration Ledger Export / Import
The Migration Ledger is a full-instance data snapshot in JSON format, designed for moving an entire Pie configuration from one environment to another (e.g., local → staging → production).
Export
- Click Download MigrationLedger.json.
- Produces a timestamped `MigrationLedger_YYYYMMDD_HHMMSS.json` file.
- Excludes: audit log records, parcel staging tables (`ParcelRawRow`, `ParcelImportBatch`, `ParcelImportJob`), and document file bytes (document rows include metadata only, not the binary file content).
- URL: `GET /admin/data-tools/export/migration-ledger/`
Import
- Upload a `MigrationLedger.json` file and click Import MigrationLedger.json.
- The import is:
- Dependency-aware — processes models in foreign key dependency order.
- Primary-key-preserving — records are created with their original IDs.
- Many-to-many aware — M2M links are applied after all base rows are written.
- Idempotent — rows are created or updated, not duplicated.
- After import, Django messages report: models processed, created, updated, skipped, and error counts. Warnings and row errors (up to 8 each) are shown inline.
- URL: `POST /admin/data-tools/import/migration-ledger/`
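The created-or-updated guarantee can be sketched with an in-memory table keyed by primary key (illustrative only; the real logic lives in `import_migration_ledger_payload()`):

```python
def upsert_rows(table: dict[int, dict], rows: list[dict]) -> tuple[int, int]:
    """Apply rows idempotently, preserving each row's original primary key.
    Returns (created, updated) counts; rows are never duplicated."""
    created = updated = 0
    for row in rows:
        pk = row["id"]
        if pk in table:
            table[pk] = dict(row)
            updated += 1
        else:
            table[pk] = dict(row)
            created += 1
    return created, updated

table: dict[int, dict] = {}
rows = [{"id": 1, "name": "Appeal A"}, {"id": 2, "name": "Appeal B"}]
print(upsert_rows(table, rows))  # (2, 0)  first import creates
print(upsert_rows(table, rows))  # (0, 2)  re-import only updates
```

Because rows keep their original IDs, foreign keys in later models resolve correctly when models are processed in dependency order.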
Parcels.csv Import (Upload File)
For importing a local parcel CSV directly through the browser.
Form fields:
- Tax year — the assessment year for this import (e.g., `2026`)
- Schema profile — the `ParcelSchemaProfile` that maps the CSV column headers to canonical Parcel fields
- File — the `.csv` file to upload
What happens on submit:
- The file is uploaded and a `ParcelImportBatch` is created.
- A background `ParcelImportJob` with action `APPLY` is queued (validate + apply runs automatically).
- The `parcel_import_worker` process must be running to pick up the job.
URL: POST /admin/data-tools/import/parcels/
For large files that may cause HTTP timeouts, use the Azure Blob URL method below instead.
Parcels.csv Import (Azure Blob URL)
For importing a parcel CSV that has been pre-uploaded to Azure Blob Storage. This method runs entirely in the background and avoids browser timeout issues.
Steps shown on the page:
- Upload the CSV to the `doc` container in Azure Storage.
- Copy the blob URL (starts with `https://caliperdocuments.blob.core.windows.net/doc/`).
- Fill in the form and click Import from Blob.
- Monitor progress in the Parcel Background Jobs section by refreshing the page.
- For detailed status and row counts, go to Parcel Teleporter (Start Import) (the `ParcelImportBatch` admin list).
Form fields:
- Blob URL — full HTTPS URL to the blob (e.g., `https://caliperdocuments.blob.core.windows.net/doc/imports/Parcels_2026.csv`)
- Tax year — the assessment year
- Schema profile — the active `ParcelSchemaProfile` to use
What happens on submit:
- URL is validated (must be `https://`, must include container/blob path).
- A `ParcelImportJob` with action `BLOB_IMPORT` is queued.
- The worker downloads the file from blob storage, runs ingest → validate → apply automatically.
- Job status is stored in the session and shown at the top of the page on next load.
URL: POST /admin/data-tools/import/parcels-blob/
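The URL checks described above can be sketched with the standard library (illustrative; the actual view may apply additional validation):

```python
from urllib.parse import urlparse

def looks_like_valid_blob_url(url: str) -> bool:
    """Check the two documented constraints: an https scheme, and a path
    containing at least a container segment and a blob name."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False
    segments = [s for s in parts.path.split("/") if s]
    return len(segments) >= 2  # container + blob name (possibly nested)

print(looks_like_valid_blob_url(
    "https://caliperdocuments.blob.core.windows.net/doc/imports/Parcels_2026.csv"
))  # True
print(looks_like_valid_blob_url("http://example.com/doc/x.csv"))  # False
```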
Dump Parcel Records
Wipes all parcel data from the database — Parcel, ParcelImportBatch, and ParcelRawRow records.
- Clicking Dump Parcel Records opens a confirmation page (`/admin/data-tools/dump-parcels/`) that shows current record counts.
- The confirmation page requires typing `delete` before the action proceeds.
- Blocked if any Appeals exist — the dump cannot run while Appeal records are present.
- On confirmation, queues a `ParcelImportJob` with action `DUMP_PARCELS` and logs an audit entry.
- The actual deletion is executed by the `parcel_import_worker` background process.
This is irreversible. Use only when preparing to re-import a clean parcel dataset from scratch.
Access Control
Every URL under /admin/data-tools/ is wrapped in admin.site.admin_view(), which enforces Django admin staff authentication. Non-staff users are redirected to the admin login page.
URL Reference
| Action | Method | URL |
|---|---|---|
| Data Tools page | GET | /admin/data-tools/ |
| Export Appeals | GET | /admin/data-tools/export/appeals/ |
| Import Appeals | POST | /admin/data-tools/import/appeals/ |
| Export Hearings | GET | /admin/data-tools/export/hearings/ |
| Import Hearings | POST | /admin/data-tools/import/hearings/ |
| Export Documents | GET | /admin/data-tools/export/documents/ |
| Import Documents | POST | /admin/data-tools/import/documents/ |
| Export Document Types | GET | /admin/data-tools/export/document-types/ |
| Import Document Types | POST | /admin/data-tools/import/document-types/ |
| Export Meetings | GET | /admin/data-tools/export/meetings/ |
| Import Meetings | POST | /admin/data-tools/import/meetings/ |
| Export Meeting Items | GET | /admin/data-tools/export/meetingappeals/ |
| Import Meeting Items | POST | /admin/data-tools/import/meetingappeals/ |
| Export Board Actions | GET | /admin/data-tools/export/motions/ |
| Import Board Actions | POST | /admin/data-tools/import/motions/ |
| Export Contacts | GET | /admin/data-tools/export/contacts/ |
| Import Contacts | POST | /admin/data-tools/import/contacts/ |
| Export Appeal Parties | GET | /admin/data-tools/export/appeal-parties/ |
| Import Appeal Parties | POST | /admin/data-tools/import/appeal-parties/ |
| Export Action Types | GET | /admin/data-tools/export/action-types/ |
| Import Action Types | POST | /admin/data-tools/import/action-types/ |
| Export Services | GET | /admin/data-tools/export/services/ |
| Import Services | POST | /admin/data-tools/import/services/ |
| Export Prior Determinations | GET | /admin/data-tools/export/prior-determinations/ |
| Import Prior Determinations | POST | /admin/data-tools/import/prior-determinations/ |
| Export Schedule Slots | GET | /admin/data-tools/export/scheduleslots/ |
| Import Schedule Slots | POST | /admin/data-tools/import/scheduleslots/ |
| Export Error References | GET | /admin/data-tools/export/error-references/ |
| Import Error References | POST | /admin/data-tools/import/error-references/ |
| Export Communications | GET | /admin/data-tools/export/communications/ |
| Import Communications | POST | /admin/data-tools/import/communications/ |
| Export Communication Subjects | GET | /admin/data-tools/export/communication-subjects/ |
| Import Communication Subjects | POST | /admin/data-tools/import/communication-subjects/ |
| Export Audit Logs | GET | /admin/data-tools/export/audit-logs/ |
| Export Database Manifest | GET | /admin/data-tools/export/database-manifest/ |
| Export Parameter Manifest | GET | /admin/data-tools/export/parameter-manifest/ |
| Export Parcels CSV | GET | /admin/data-tools/export/parcels/ |
| Import Parcels (file upload) | POST | /admin/data-tools/import/parcels/ |
| Import Parcels (blob URL) | POST | /admin/data-tools/import/parcels-blob/ |
| Export Parcel Schema Profile | POST | /admin/data-tools/export/parcel-schema/ |
| Import Parcel Schema Profile | POST | /admin/data-tools/import/parcel-schema/ |
| Export Migration Ledger | GET | /admin/data-tools/export/migration-ledger/ |
| Import Migration Ledger | POST | /admin/data-tools/import/migration-ledger/ |
| Dump Parcel Records (confirm) | GET/POST | /admin/data-tools/dump-parcels/ |
Key Code Locations
| Path | Description |
|---|---|
| appeals/views.py | data_tools() view and all export/import view functions |
| appealsys/urls.py | All /admin/data-tools/ URL patterns |
| templates/admin/data_tools.html | Page template |
| appeals/services/parcel_import_pipeline.py | Pipeline called by parcel import views |
| appeals/services/migration_ledger.py | export_migration_ledger_json() and import_migration_ledger_payload() |
Parcel Column Import Manager (/admin/appeals/parcelschemaprofile/)
What Is This Page?
The Parcel Column Import Manager is the admin section for creating and editing ParcelSchemaProfile records. A schema profile is the translation table that tells Pie how to read a vendor's CSV export — mapping each source column header to one of Pie's canonical parcel fields and specifying how the raw value should be parsed.
Every parcel import requires a profile. This page is where profiles are built and maintained.
Access: Visible and accessible to superusers only. Staff users cannot see this section in the navigation even if they have admin access. This is enforced by `get_model_perms()` in `ParcelSchemaProfileAdmin`.
List View
URL: /admin/appeals/parcelschemaprofile/
Columns
| Column | Description |
|---|---|
| Name | The profile identifier — used to select the profile during import |
| Vendor | The data source platform (Esri / ArcGIS) |
| Active | Whether the profile appears in import dropdowns on Data Tools and Validate Importations |
Filters
- Vendor — filter by data platform
- Active — filter by active/inactive status
Search
Searches across name and description fields.
Action: Auto-fill Friendly Labels
Selecting one or more profiles and running Auto-fill Friendly Labels (leave existing values untouched) fills any blank friendly_label values on all column mappings for the selected profiles. Labels are generated by title-casing the canonical field name (e.g., res_assessed_value → Res Assessed Value). Mappings that already have a label are not touched.
This is a cosmetic operation — friendly labels are displayed in the UI but never used by the import pipeline.
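The label generation is effectively a title-cased version of the canonical field name; a one-line sketch (illustrative; the admin action's exact handling of acronyms may differ):

```python
def friendly_label(canonical_field: str) -> str:
    """Generate a display label from a canonical field name by replacing
    underscores with spaces and title-casing the result."""
    return canonical_field.replace("_", " ").title()

print(friendly_label("res_assessed_value"))  # Res Assessed Value
print(friendly_label("parcel_id"))           # Parcel Id
```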
Detail / Edit View
URL: /admin/appeals/parcelschemaprofile/<id>/change/
Profile Fields
| Field | Required | Description |
|---|---|---|
| Name | Yes | Unique identifier, max 128 characters. Used to select the profile at import time and to match by name during JSON import. |
| Vendor | Yes | Currently the only choice is Esri / ArcGIS. |
| Description | No | Free-text note — useful for recording which county or data source this profile covers. |
| Active | Yes | Controls visibility in import dropdowns. Inactive profiles are hidden from Data Tools and Validate Importations but remain in the database. |
Column Mappings Inline
Below the profile fields is the Column Mappings tabular inline. Each row maps one CSV column to one canonical Parcel field.
| Field | Required | Description |
|---|---|---|
| Source Field | Yes | Exact CSV column header as it appears in the file, including case and any underscores. Max 255 characters. Must be unique within the profile. |
| Canonical Field | Yes | The Pie parcel field this column maps to. Selected from a fixed dropdown — see full list below. |
| Required | — | If checked, rows missing this field (or where the field is blank after transformation) are marked invalid during validation and not applied. At minimum, account should be required. |
| Transform | Yes | How the raw string value is parsed before storing — see transforms below. |
| Friendly Label | No | Display-only label shown in the UI. Does not affect import logic in any way. Can be filled automatically using the Auto-fill Friendly Labels action. |
Rows are ordered by profile, then source field alphabetically.
Canonical Fields
The complete list of fields a column mapping can target:
| Canonical Field | Label | Notes |
|---|---|---|
| account | Account | Primary identifier — always required |
| parcel_id | Parcel ID | |
| owner_name | Owner Name | |
| situs_address | Situs Address | Physical property address |
| mailing_address | Mailing Address | Full mailing address string |
| municipality | Municipality | Taxing jurisdiction / city |
| situs_zip | Situs ZIP | |
| assessor_total_assessed_value | Assessor Total Assessed Value | Auto-computed from res+com+ag if not mapped |
| assessor_total_market_value | Assessor Total Market Value | |
| res_assessed_value | Residential Assessed Value | |
| res_improvement_value | Residential Improvement Value | |
| res_land_value | Residential Land Value | |
| com_assessed_value | Commercial Assessed Value | |
| com_improvement_value | Commercial Improvement Value | |
| com_land_value | Commercial Land Value | |
| ag_assessed_value | Agricultural Assessed Value | |
| ag_improvement_value | Agricultural Improvement Value | |
| ag_land_value | Agricultural Land Value | |
| prev_owner_1 | Previous Owner 1 | |
| sale_price_1 | Sale Price 1 | |
| sale_date_1 | Sale Date 1 | |
| parcel_report_url | Parcel Report URL | URL to external assessor parcel detail page |
`assessor_total_assessed_value` is automatically computed by the pipeline as `res_assessed_value + com_assessed_value + ag_assessed_value` if it is not present in the CSV or the mapped value is blank. You do not need to map it unless the source file provides it directly.
Transforms
| Transform | Behavior |
|---|---|
| identity | Raw string value, no change |
| strip | Strip leading and trailing whitespace |
| upper | Convert to uppercase |
| lower | Convert to lowercase |
| decimal | Parse as decimal number — handles comma-formatted values (e.g., 1,234,567.00) |
| int | Parse as integer |
| date | Parse common date formats: YYYY-MM-DD, MM/DD/YYYY, MM/DD/YY, YYYY/MM/DD; strips Esri timezone suffixes (e.g., 2026-01-15T00:00:00.000Z) |
| date_epoch_ms | Convert integer epoch milliseconds to a date — used for date columns from Esri feature service REST exports |
Constraints and Validation
- Name is unique across all profiles — two profiles cannot share the same name.
- Source field is unique within a profile — a given CSV column can only be mapped once per profile. Attempting to add a duplicate `source_field` on the same profile raises a validation error.
- The import pipeline only applies mappings to columns it finds in the CSV header. Mappings for columns not present in the file are silently skipped (unless the mapping is marked Required, in which case the row is marked invalid).
- Mappings for legacy canonical fields (`objectid`, `situs_name`) that have been removed from the Parcel model are silently dropped during JSON export and import. They cannot be added via the admin dropdown as they are not in the `CANONICAL_FIELDS` list.
Relationship to Imports
When an import runs (via Data Tools, Validate Importations, or CLI), the selected profile is loaded by _load_field_map() in the pipeline service. The pipeline:
- Reads the `ParcelFieldMap` rows for the selected profile.
- For each CSV row, looks up each column header in the field map.
- Applies the specified transform to the raw value.
- Stores the result under the canonical field name in the normalized JSON.
- Checks required fields — marks the row invalid if any are missing or unparseable.
The profile is stored as a foreign key on the ParcelImportBatch record (schema_profile), so it is always possible to see which profile was used for a given import.
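The per-row steps above can be pictured in plain Python. This is a hypothetical sketch under assumed data shapes — `normalize_row` and its `field_map` structure are illustrative, not the pipeline's actual API (`_load_field_map()` / `_transform_value()`).

```python
# Hypothetical sketch of the per-row steps above; not the actual pipeline code.
def normalize_row(raw_row: dict, field_map: dict):
    """field_map maps CSV header -> (canonical_name, transform, required)."""
    normalized, errors = {}, []
    for header, (canonical, transform, required) in field_map.items():
        raw_value = raw_row.get(header, "")
        if not raw_value.strip():
            # Missing or blank: only an error if the mapping is Required.
            if required:
                errors.append(f"Required field '{canonical}' is missing or blank.")
            continue
        try:
            normalized[canonical] = transform(raw_value)
        except (ValueError, ArithmeticError) as exc:
            errors.append(f"Field '{canonical}': {exc}")
    return normalized, errors

field_map = {
    "Account": ("account", str.strip, True),
    "Owner": ("owner_name", str.strip, False),
}
normalized, errors = normalize_row({"Account": " 1234-5678 "}, field_map)
# normalized == {"account": "1234-5678"}; errors == []
```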
Export and Import via JSON
Profiles can be transferred between environments without manual re-entry using the JSON export/import on Data Tools.
- Export: Data Tools → Parcel Schema Export → select profile → Download ParcelSchemaProfile.json
- Import: Data Tools → Parcel Schema Import → upload `.json` file
The import matches by profile name and replaces all column mappings atomically. See parcel_schema_profile.md for the full JSON format reference.
Key Code Locations
| Path | Description |
|---|---|
| appeals/admin.py:266 | ParcelFieldMapInline — inline column mapping editor |
| appeals/admin.py:282 | ParcelSchemaProfileAdmin — list display, filters, actions, superuser-only gate |
| appeals/models.py:790 | ParcelSchemaProfile model |
| appeals/models.py:809 | CANONICAL_FIELDS — authoritative list of valid canonical field names |
| appeals/models.py:835 | ParcelTransform — transform choices |
| appeals/models.py:846 | ParcelFieldMap model |
| appeals/services/parcel_import_pipeline.py | _load_field_map(), _transform_value() — where the profile is consumed |
| appeals/views.py:1119 | LEGACY_PARCEL_CANONICAL_FIELD_ALIASES — dropped legacy field names |
| appeals/views.py:1135 | _parcel_schema_export_payload() — JSON export logic |
| appeals/views.py:1324 | import_parcel_schema_profile() — JSON import logic |
Parcel Import Jobs (/admin/appeals/parcelimportjob/)
What Is This Page?
Parcel Import Jobs is the admin section for monitoring and inspecting background jobs in the parcel import pipeline. Every validate, apply, blob import, dump, and raw-row purge operation runs as a ParcelImportJob record. This page is the authoritative history of all background parcel operations — what ran, when, who queued it, and what happened.
Jobs are created (queued) by:
- Data Tools — Parcels.csv Import (file upload or Azure Blob URL)
- Data Tools — Dump Parcel Records
- Validate Importations — Queue Validation / Queue Apply buttons and bulk actions
- Parcel Row Import Manager — Queue background purge action
The parcel_import_worker management command is the process that picks up and executes queued jobs. Nothing runs unless the worker is running.
Access: Viewable by all staff users. No staff user can add or manually edit a job — the admin is entirely read-only. Jobs are created only through the queue system.
List View
URL: /admin/appeals/parcelimportjob/
Jobs are ordered newest first (-created_at).
Columns
| Column | Description |
|---|---|
| ID | Auto-assigned job identifier |
| Action | What the job does — see action types below |
| Status | Current lifecycle state — see statuses below |
| Batch | The ParcelImportBatch this job operates on, if applicable. Blank for DUMP_PARCELS and PURGE_RAW_ROWS |
| Progress Current | Rows processed so far |
| Progress Total | Total rows to process |
| Progress Message | Latest message from the worker (e.g., Validating, Applying, Completed, Failed) |
| Created At | When the job was queued |
| Started At | When the worker claimed and began the job |
| Finished At | When the job completed or failed |
Filters
- Action — filter by job type
- Status — filter by lifecycle state
Search
Searches by job ID or batch ID.
Action Types
| Action | What It Does |
|---|---|
| VALIDATE | Runs the validate/normalize phase on a batch — applies field map transforms, marks each ParcelRawRow as valid or invalid, updates batch row counts |
| APPLY | Runs the apply/upsert phase — writes valid rows to Parcel records. Auto-validates first if the batch is not yet in VALIDATED status |
| BLOB_IMPORT | Downloads a CSV from Azure Blob Storage, then runs ingest → validate → apply as a single background operation |
| DUMP_PARCELS | Deletes all Parcel, ParcelRawRow, and ParcelImportBatch records in chunked batches. Blocked if any Appeals exist |
| PURGE_RAW_ROWS | Deletes ParcelRawRow records for one or more specific batches. Leaves the Parcel and ParcelImportBatch records intact |
Job Statuses
| Status | Meaning |
|---|---|
| QUEUED | Created and waiting for the worker to pick it up |
| RUNNING | Claimed by the worker; currently executing |
| SUCCESS | Completed without error. progress_message = Completed |
| FAILED | An exception was raised. last_error contains the exception message |
Status transitions are always: QUEUED → RUNNING → SUCCESS or FAILED. There is no retry — a failed job remains failed. To retry, a new job must be queued (e.g., by clicking Queue Validation or Queue Apply again on the batch).
Detail View
URL: /admin/appeals/parcelimportjob/<id>/change/
All fields are read-only. No field can be edited through the admin.
Fields
| Field | Description |
|---|---|
| Action | Job type |
| Status | Lifecycle state |
| Batch | Link to the associated ParcelImportBatch, if any |
| Created By | The staff user who triggered the job. Blank for jobs queued by the worker itself (e.g., blob imports creating their own batch mid-run) |
| Payload | JSON object with action-specific parameters — see payload details below |
| Progress Current / Total | Row-level progress counters |
| Progress Message | Latest status message |
| Last Error | Exception message if the job failed. Empty on success |
| Started At | Timestamp when the worker claimed the job |
| Finished At | Timestamp when the job completed or failed |
| Created At / Updated At | Record timestamps |
Payload Contents by Action
| Action | Payload Fields |
|---|---|
| VALIDATE | Typically empty — batch reference is stored in the batch FK |
| APPLY | Typically empty — batch reference is stored in the batch FK |
| BLOB_IMPORT | blob_url, tax_year, profile_name |
| DUMP_PARCELS | Populated after completion: deleted_parcels, deleted_batches, deleted_raw_rows |
| PURGE_RAW_ROWS | import_batch_ids (list), requested_rows, dedupe_key; populated after completion: deleted_raw_rows |
Job Deduplication
The queue system prevents redundant jobs from piling up. When a job is queued via queue_parcel_job():
- For batch-linked jobs (`VALIDATE`, `APPLY`): if a job with the same action and batch is already `QUEUED` or `RUNNING`, the existing job is returned and no new record is created.
- For non-batch jobs (`DUMP_PARCELS`, `PURGE_RAW_ROWS`): deduplication is by `dedupe_key` in the payload. If a job with the same action, no batch, and matching `dedupe_key` is already active, the existing job is returned.
This means clicking Queue Validation twice for the same batch will not create two jobs — the second click is silently a no-op.
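In simplified form, the rule looks like this. The sketch below is an in-memory model with jobs as dicts; the real `queue_parcel_job()` in `appeals/services/parcel_import_jobs.py` queries the `ParcelImportJob` table instead.

```python
ACTIVE_STATUSES = {"QUEUED", "RUNNING"}

# Simplified, in-memory model of the deduplication rules above.
def queue_job(jobs, action, batch_id=None, dedupe_key=None):
    for job in jobs:
        if job["status"] not in ACTIVE_STATUSES or job["action"] != action:
            continue
        if batch_id is not None and job["batch_id"] == batch_id:
            return job  # existing batch-linked job returned, nothing created
        if batch_id is None and job["batch_id"] is None and job["dedupe_key"] == dedupe_key:
            return job  # existing non-batch job matched by dedupe_key
    job = {"action": action, "status": "QUEUED",
           "batch_id": batch_id, "dedupe_key": dedupe_key}
    jobs.append(job)
    return job

jobs = []
first = queue_job(jobs, "VALIDATE", batch_id=7)
second = queue_job(jobs, "VALIDATE", batch_id=7)  # no-op: same job returned
# first is second and len(jobs) == 1
```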
The Worker Process
Jobs do not execute automatically — they require the parcel_import_worker management command to be running as a long-lived daemon process.
`python manage.py parcel_import_worker`
How the worker operates:
- Polls for `QUEUED` jobs every 5 seconds (default; configurable with `--sleep N`).
- Claims the oldest queued job using `SELECT FOR UPDATE SKIP LOCKED` to prevent concurrent workers from claiming the same job.
- Transitions the job to `RUNNING` and records `started_at`.
- Executes the action.
- On success: sets status to `SUCCESS`, `progress_message = Completed`, records `finished_at`.
- On exception: sets status to `FAILED`, writes the exception to `last_error`, records `finished_at`.
Progress updates are written to the job record every 250 rows or every 2 seconds (throttled by the ProgressTracker in the pipeline service). Refreshing the list view or detail view shows the latest progress.
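The throttling rule can be sketched like this. Illustrative only: the real `ProgressTracker` in the pipeline service may differ in detail.

```python
import time

# Sketch of the throttling behavior described above: persist progress at
# most every 250 rows or every 2 seconds, whichever comes first.
class ProgressTracker:
    ROW_INTERVAL = 250
    TIME_INTERVAL = 2.0  # seconds

    def __init__(self, write):
        self._write = write                 # callback persisting progress
        self._last_row = 0
        self._last_time = time.monotonic()

    def update(self, current, total, message=""):
        now = time.monotonic()
        if (current - self._last_row >= self.ROW_INTERVAL
                or now - self._last_time >= self.TIME_INTERVAL
                or current == total):       # always flush the final row
            self._write(current, total, message)
            self._last_row, self._last_time = current, now

writes = []
tracker = ProgressTracker(lambda cur, total, msg: writes.append(cur))
for row in range(1, 1001):
    tracker.update(row, 1000, "Validating")
# writes == [250, 500, 750, 1000] when the loop runs in well under 2 seconds
```

This is why the Progress Current column advances in steps rather than one row at a time.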
Single-run mode (useful for testing or one-off manual execution):
`python manage.py parcel_import_worker --once`
What to Check When a Job Fails
- Open the failed job's detail view.
- Read Last Error — this is the raw exception message from the worker.
- Common failure causes:
| Error | Cause |
|---|---|
| Profile '<name>' not found | The schema profile named in the blob import payload no longer exists or was renamed |
| Azure credentials are not configured | AZURE_STORAGE_CONNECTION_STRING (or AZURE_STORAGE_ACCOUNT_NAME + AZURE_STORAGE_KEY) env vars are missing |
| Blob URL must include container and blob path | The blob URL submitted was malformed |
| Parcel dump blocked while N appeal(s) exist | Dump cannot run while any Appeal records exist — delete appeals first |
| Validate job requires a batch / Apply job requires a batch | The associated ParcelImportBatch was deleted before the job ran |
| Raw row purge job requires at least one import batch id | The PURGE_RAW_ROWS payload had no valid batch IDs |
- To retry after fixing the root cause, re-queue a new job from the batch detail page or Data Tools — do not attempt to edit the failed job record.
Relationship to Other Pages
| Page | Relationship |
|---|---|
| Validate Importations | Batch detail page shows the latest job's progress; Queue Validation and Queue Apply buttons create VALIDATE / APPLY jobs |
| Data Tools | Shows last 10 jobs in the Parcel Background Jobs table; file upload and blob URL forms create APPLY / BLOB_IMPORT jobs; Dump Parcel Records creates a DUMP_PARCELS job |
| Parcel Row Import Manager | Queue background purge action creates PURGE_RAW_ROWS jobs |
Key Code Locations
| Path | Description |
|---|---|
| appeals/models.py:913 | ParcelImportJob model — action/status choices, payload, progress fields |
| appeals/admin.py:557 | ParcelImportJobAdmin — read-only list and detail view |
| appeals/services/parcel_import_jobs.py | queue_parcel_job() — deduplication logic and job creation |
| appeals/management/commands/parcel_import_worker.py | Worker daemon — _claim_next_job(), _run_job(), all action handlers |
| appeals/services/parcel_import_pipeline.py | validate_batch(), apply_batch() — called by the worker |
Parcel Row Import Manager (/admin/appeals/parcelrawrow/)
What Is This Page?
The Parcel Row Import Manager is the admin section for inspecting individual ParcelRawRow records — one record per CSV row per import batch. Every row ingested from a parcel CSV is stored here in its original and normalized forms, along with any validation errors produced during the validate phase.
This page is primarily a diagnostic tool. When an import has invalid rows, this is where you come to see exactly what was in the source data, what the pipeline produced after applying transforms, and which specific validation errors caused a row to be rejected.
Access: Visible to superusers only. Regular staff users cannot see this section in the navigation. This is enforced by `get_model_perms()` in `ParcelRawRowAdmin`.
List View
URL: /admin/appeals/parcelrawrow/
Rows are ordered by row_number (ascending) within each batch.
Columns
| Column | Description |
|---|---|
| Import Batch | The ParcelImportBatch this row belongs to (links to the batch) |
| Row Number | The 1-based row position in the original CSV file |
| Normalized Key | The account value extracted from the row (populated during ingest from the raw account-mapped column if identifiable, otherwise blank). Indexed for fast lookup. |
| Valid | Boolean — whether the row passed validation. True rows are applied to Parcel records; False rows are not. |
Filters
- Valid — show only valid or only invalid rows
- Import Batch — filter to all rows for a specific batch
Search
Searches by normalized_key (account value).
Action: Queue Background Purge
Selecting rows and running Queue background purge for selected import batch(es) queues a PURGE_RAW_ROWS job for the import batches represented in the selection.
Important behavior:
- The purge operates at the batch level, not the row level. Selecting any row from a batch targets all raw rows for that entire batch — not just the selected rows.
- Deduplication prevents a second purge job from being queued if one is already `QUEUED` or `RUNNING` for the same batch set.
- If a matching job already exists, the action reports the existing job ID and takes no further action.
- The default Django Delete selected action is removed — raw rows cannot be deleted individually through the admin. Deletion must go through the background purge job.
Detail View
URL: /admin/appeals/parcelrawrow/<id>/change/
All fields are read-only. No field can be edited.
Fields
| Field | Description |
|---|---|
| Import Batch | The ParcelImportBatch this row came from |
| Row Number | 1-based position in the source CSV |
| Normalized Key | Account value — the primary lookup key for this row |
| Valid | Whether the row passed validation |
| Raw | The original CSV row stored as a JSON object. Keys are the exact column headers from the CSV file; values are the unmodified strings from the source. This is never altered after ingest. |
| Normalized | The validated and transformed row as a JSON object. Keys are canonical field names (e.g., account, res_assessed_value); values are the parsed results after applying the schema profile's transforms. Populated during the validate phase. null before validation. |
| Errors | A JSON list of error strings describing why the row failed validation. Empty list ([]) for valid rows. null before validation. |
How Raw Rows Are Created
During the ingest phase of the pipeline:
- The CSV is read line by line.
- Each row is stored as a `ParcelRawRow` with `raw` = the original CSV row dict and `valid = False`.
- Rows with a blank account field are skipped entirely — they produce no `ParcelRawRow` record.
- Rows are bulk-created in batches for efficiency.
At this point normalized, errors, and normalized_key are all unpopulated.
How Rows Are Validated
During the validate phase:
- The pipeline loads the schema profile's field maps.
- For each raw row, it applies the transform for each mapped column.
- The result is stored in `normalized` as a canonical-field-keyed JSON object.
- Any field marked required that is missing or unparseable is recorded in `errors`.
- `valid` is set to `True` if `errors` is empty, `False` otherwise.
- `normalized_key` is set to the account value extracted from the normalized output.
Reading the Raw and Normalized Fields
raw example
This is exactly what was in the CSV:
```json
{
  "Account": " 1234-5678 ",
  "Owner": "SMITH JOHN",
  "SiteAddress": "123 MAIN ST",
  "ResidentialAssessedValue": "142,500.00",
  "TotalMarketValue": "189,000.00"
}
```
normalized example
After applying the schema profile transforms (strip, strip, strip, decimal, decimal):
```json
{
  "account": "1234-5678",
  "owner_name": "SMITH JOHN",
  "situs_address": "123 MAIN ST",
  "res_assessed_value": "142500.00",
  "assessor_total_market_value": "189000.00"
}
```
errors example for an invalid row
[ "Required field 'account' is missing or blank." ]
or for a transform failure:
[ "Field 'res_assessed_value': cannot convert 'N/A' to decimal." ]
Raw Row Retention
Raw rows are retained indefinitely after import — they are not automatically deleted when a batch is applied. This is intentional: they serve as the permanent audit record of exactly what source data was imported.
To free database space after a batch has been applied and verified, use the Queue background purge action on this page, or the same action available from the batch detail page in Validate Importations. The PURGE_RAW_ROWS job deletes all ParcelRawRow records for the targeted batches without affecting the Parcel records that were created from them.
Relationship to Other Pages
| Page | Relationship |
|---|---|
| Validate Importations | Batch detail shows Top Errors and Invalid Rows (sample) derived from raw rows; each batch links back to its raw rows |
| Property Account Search | Each Parcel record has a raw_last_seen FK pointing to the most recent ParcelRawRow that populated it |
| Parcel Import Jobs | PURGE_RAW_ROWS jobs created here are visible and monitored on the Import Jobs page |
Key Code Locations
| Path | Description |
|---|---|
| appeals/models.py:1085 | ParcelRawRow model — all fields and indexes |
| appeals/admin.py:600 | ParcelRawRowAdmin — read-only list/detail, purge action, superuser gate, delete removal |
| appeals/services/parcel_import_pipeline.py | ingest_file() creates raw rows; validate_batch() populates normalized, errors, valid, normalized_key |
| appeals/services/parcel_import_jobs.py | queue_parcel_job() — used by the purge action to create the background job |
| appeals/management/commands/parcel_import_worker.py | _run_purge_raw_rows() — executes the chunked deletion |
Validate Importations (/admin/appeals/parcelimportbatch/)
What Is This Page?
Validate Importations is the admin section for managing ParcelImportBatch records. A batch represents one uploaded CSV file — it tracks the file itself, which tax year and schema profile it belongs to, its current pipeline status, and all row counts and errors produced during validation.
This page is the step-by-step path for importing parcels when uploading a file directly through the admin (as opposed to using the Azure Blob URL method on Data Tools). It is also where you monitor validation results and apply batches after reviewing them.
Access: Available to all staff users with standard Django admin permissions on the
ParcelImportBatchmodel.
List View
URL: /admin/appeals/parcelimportbatch/
Batches are ordered newest first (-uploaded_at).
Columns
| Column | Description |
|---|---|
| Original Filename | The name of the uploaded CSV file |
| Tax Year | The assessment year this batch was imported for |
| Status | Current pipeline state — see statuses below |
| Row Count Total | Total rows ingested (excludes blank-account rows) |
| Row Count Valid | Rows that passed validation |
| Row Count Invalid | Rows that failed validation |
| Schema Profile | The ParcelSchemaProfile used for this batch |
| Uploaded At | When the batch was created |
Filters
- Tax Year — filter to a specific assessment year
- Status — filter by pipeline state
- Schema Profile — filter by which profile was used
Search
Searches by original filename and file hash (SHA256).
Bulk Actions
| Action | Description |
|---|---|
| Validate selected batches (background) | Queues a VALIDATE job for each selected batch. Skips batches that already have a QUEUED or RUNNING validate job. |
| Apply selected batches (background) | Queues an APPLY job for each selected batch. The apply worker auto-validates first if the batch is not yet in VALIDATED status. Skips batches with an active job. |
Batch Statuses
| Status | Meaning |
|---|---|
| UPLOADED | File has been ingested and raw rows stored. Validation has not run yet. |
| VALIDATED | Validate phase complete. Row counts and error details are populated. Ready to apply. |
| APPLYING | Apply phase is currently running. |
| APPLIED | All valid rows have been upserted to Parcel records. |
| FAILED | A pipeline error occurred. error_details contains the failure message. |
| DUPLICATE | Upload was rejected because a batch with the same (tax_year, file_hash_sha256) already exists. |
Adding a New Batch (Upload)
URL: /admin/appeals/parcelimportbatch/add/
This is how you upload a parcel CSV directly through the admin. The upload form has the following fields:
| Field | Required | Description |
|---|---|---|
| Tax Year | Yes | The assessment year for this import. Combined with the file hash to enforce deduplication. |
| Schema Profile | No | The ParcelSchemaProfile to use for validation. Required in practice — without one the validate phase cannot map any columns. |
| Source Name | No | Free-text label for the data source (e.g., County Assessor Export 2026). Informational only. |
| Uploaded File | Yes | The CSV file to import. Must be non-empty. |
| Notes | No | Free-text notes about this batch. |
On save:
- The file's SHA256 hash is computed from the uploaded bytes.
- If a batch with the same `(tax_year, file_hash_sha256)` already exists, the form raises a validation error and the upload is rejected. The error message includes the ID of the existing duplicate batch.
- If accepted, the batch is saved with status `UPLOADED` and the file is stored under `media/parcel_imports/<year>/`.
- The `uploaded_by` field is set to the current user automatically.
- Raw rows are not created at this point — ingest runs when the first job (validate or apply) processes the batch.
After saving, proceed to the batch detail page to queue background jobs.
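Because deduplication keys on `(tax_year, file_hash_sha256)`, you can compute a file's hash locally before uploading and compare it to an existing batch's File Hash SHA256 to predict a duplicate rejection. This is a standard-library sketch; `file_sha256` is a hypothetical helper, not part of the codebase.

```python
import hashlib

# Compute the same SHA256 the upload form computes from the uploaded bytes.
def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Read in chunks so large assessor exports do not load into memory.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

A matching hash on a batch with the same tax year means the upload would be rejected as a duplicate.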
Batch Detail View
URL: /admin/appeals/parcelimportbatch/<id>/change/
Editable Fields (on existing batches)
Once a batch is saved, only schema_profile, source_name, and uploaded_file remain editable through the form. All status, count, and diagnostic fields are read-only.
Read-Only Fields
| Field | Description |
|---|---|
| Uploaded At | Timestamp of upload |
| Uploaded By | Staff user who created the batch |
| File Hash SHA256 | SHA256 of the uploaded file — used for deduplication |
| Row Count Total | Rows stored as ParcelRawRow records (populated after ingest) |
| Row Count Skipped Blank Account | Rows skipped during ingest because the account field was blank |
| Row Count Valid | Rows that passed validation (populated after validate phase) |
| Row Count Invalid | Rows that failed validation (populated after validate phase) |
| Status | Current pipeline state |
| Error Details | Pipeline-level error message if status is FAILED |
| Notes | Notes entered at upload time |
Background Actions (inline buttons)
Two buttons appear in the Background Actions read-only field:
- Queue Validation — queues a `VALIDATE` job for this batch. If a validate or apply job is already `QUEUED` or `RUNNING` for this batch, reports the existing job ID and does nothing.
- Queue Apply — queues an `APPLY` job for this batch. The apply worker will auto-validate if needed before applying.
Both buttons redirect back to the batch detail page after queuing.
Latest Job Progress
Displays the most recently created job for this batch:
<Action> — <Status> | <current>/<total> (<percent>%) | <message>
Examples:
- `Validate — Running | 1500/8000 (18%) | Validating`
- `Apply — Success | 8000/8000 (100%) | Completed`
- `Apply — Failed | 450/8000 (5%) | Failed`
This field does not auto-refresh — reload the page to see updated progress.
Top Errors (sample)
Aggregates error messages from the first 200 invalid ParcelRawRow records, counts occurrences of each distinct message, and shows the top 20 most frequent errors in descending order.
Format: <count> × <error message>
Example:
```
142 × Required field 'account' is missing or blank.
 37 × Field 'res_assessed_value': cannot convert 'N/A' to decimal.
 12 × Field 'sale_date_1': unrecognized date format '00/00/0000'.
```
Use this to quickly identify systematic problems — a transform misconfiguration, a missing column mapping, or a data quality issue in the source file.
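The aggregation itself amounts to a frequency count. The sketch below mirrors the behavior described above; the actual implementation is `error_summary()` in `appeals/admin.py` and may differ in detail.

```python
from collections import Counter

# Count distinct error messages across the sampled invalid rows and keep
# the most frequent, formatted as "<count> × <message>".
def top_errors(invalid_rows, sample=200, top=20):
    counter = Counter()
    for row in invalid_rows[:sample]:
        counter.update(row["errors"])
    return [f"{count} × {message}" for message, count in counter.most_common(top)]

rows = [
    {"errors": ["Required field 'account' is missing or blank."]},
    {"errors": ["Required field 'account' is missing or blank."]},
    {"errors": ["Field 'sale_date_1': unrecognized date format '00/00/0000'."]},
]
# top_errors(rows)[0] == "2 × Required field 'account' is missing or blank."
```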
Invalid Rows (sample)
Shows the first 20 invalid rows with their row number and error list:
```
Row 14: ["Field 'res_assessed_value': cannot convert 'N/A' to decimal."]
Row 23: ["Required field 'account' is missing or blank."]
```
For deeper inspection of individual rows (including the full raw and normalized JSON), navigate to Parcel Row Import Manager and filter by this batch.
Typical Workflow
- Upload — Add a new batch via the Add page. Select tax year, schema profile, and file. Save.
- Queue Validation — On the batch detail page, click Queue Validation. The `parcel_import_worker` will pick this up and run the validate phase.
- Review — Reload the page to check Latest Job Progress, Row Count Valid / Invalid, Top Errors, and Invalid Rows (sample).
- Decide — If invalid rows are acceptable (e.g., a known subset of unmappable records), proceed to apply. If errors indicate a schema or data problem, fix the profile or re-export the CSV and upload again.
- Queue Apply — Click Queue Apply. The worker upserts all valid rows to `Parcel` records.
- Confirm — Verify in Property Account Search that parcel records exist and look correct.
Deduplication
A batch is rejected at upload time if a ParcelImportBatch record already exists with the same (tax_year, file_hash_sha256). The error message identifies the conflicting batch by ID.
To import the same file again legitimately:
- Use the CLI with `--force` to bypass the hash check.
- Or modify the file (even a single whitespace change will produce a different hash).
- Or dump all parcel records first (if starting fresh).
Re-applying an already-applied batch is idempotent — the same valid rows simply update existing Parcel records in place.
Relationship to Other Pages
| Page | Relationship |
|---|---|
| Data Tools | Alternative upload path (file upload and blob URL); also shows last 10 jobs in the Parcel Background Jobs table |
| Parcel Import Jobs | Every Queue Validation / Queue Apply action creates a job visible here; full job history and error details |
| Parcel Row Import Manager | All ParcelRawRow records for a batch; full raw / normalized / errors inspection |
| Property Account Search | Parcel records created by applying this batch; each Parcel links back to its current_import_batch |
| Parcel Column Import Manager | Schema profiles available for selection on this page |
Key Code Locations
| Path | Description |
|---|---|
| appeals/models.py:867 | ParcelImportStatus choices |
| appeals/models.py:876 | ParcelImportBatch model — all fields, unique constraint, ordering |
| appeals/admin.py:316 | ParcelImportBatchAdmin — upload form, list display, read-only fields, queue buttons, bulk actions, error summary, invalid sample |
| appeals/admin.py:384 | save_model() — auto-sets uploaded_by |
| appeals/admin.py:390 | queue_actions() — Queue Validation / Queue Apply inline buttons |
| appeals/admin.py:404 | job_progress() — Latest Job Progress display |
| appeals/admin.py:524 | error_summary() — Top Errors aggregation logic |
| appeals/admin.py:544 | invalid_sample() — first 20 invalid rows |
| appeals/services/parcel_import_jobs.py | queue_parcel_job() — deduplication and job creation |
| appeals/services/parcel_import_pipeline.py | ingest_file(), validate_batch(), apply_batch() |
Login, User Management, and Permissions
Login Entry Points
Pie has two distinct login paths depending on the user's role.
Selector Page (/login/)
The root login selector at /login/ presents two buttons:
- Local Login → `/accounts/login/?next=/portal/` — for portal (non-admin) users authenticating with a username and password.
- Admin Login → `/admin/login/?next=/admin/` — for staff and superusers accessing the Django admin.
Already-authenticated users are automatically redirected to their appropriate destination (/admin/ or /portal/) without seeing the selector.
Local / Portal Login (/accounts/login/)
Used by portal-only users. Renders the allauth local login form with:
- Username and password fields
- Local User Sign In button
- Continue with Microsoft button (only shown if Microsoft SSO is configured)
- Forgot password link
After successful login, RoleAwareAccountAdapter.get_login_redirect_url() sends admin-capable users to /admin/ and all others to /portal/.
Admin Login (/admin/login/)
Used by staff and superusers. Renders the Django admin login form with:
- Username and password fields
- Remember Me checkbox
- Admin User Sign In button
- Continue with Microsoft button (only shown if Microsoft SSO is configured)
Remember Me behavior:
- Checked → session persists for `SESSION_COOKIE_AGE` (default Django value: 2 weeks)
- Unchecked → session expires when the browser is closed
If an authenticated user who does not meet admin access requirements hits this URL, they are redirected to /portal/.
Microsoft SSO (/accounts/microsoft/login/)
Both login forms show a Continue with Microsoft button when Microsoft SSO is configured. Clicking it initiates an OAuth2 flow via the allauth Microsoft social provider.
How it works:
- The user is redirected to Microsoft's login page.
- Microsoft authenticates the user and returns an email address.
- Pie's `MicrosoftSocialAccountAdapter` intercepts the callback and checks:
  - The email returned by Microsoft is non-empty.
  - A Django user with that email already exists in Pie's database.
  - That user's account is active (`is_active = True`).
- If all checks pass, the Microsoft account is linked to the existing Django user and the login proceeds.
- If any check fails, the user is redirected back to `/login/` with an error message.

Microsoft SSO does not create new users. The Django user account must exist first. Self-signup is disabled (`is_open_for_signup` returns `False`).
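The three acceptance checks reduce to logic like the following. This is a pure-function sketch with hypothetical names; the real checks live in `MicrosoftSocialAccountAdapter` and operate on Django user objects.

```python
# Hypothetical helper mirroring the acceptance checks above.
# `known_users` maps lowercase email -> is_active flag for existing accounts.
def sso_login_allowed(ms_email: str, known_users: dict):
    if not ms_email:
        return False, "Microsoft returned no email address."
    is_active = known_users.get(ms_email.lower())
    if is_active is None:
        return False, "No matching user account exists."  # SSO never creates users
    if not is_active:
        return False, "Account is deactivated."
    return True, ""

known_users = {"clerk@example.gov": True, "former@example.gov": False}
# sso_login_allowed("Clerk@example.gov", known_users) == (True, "")
```

The lowercase lookup reflects the case-insensitive email match described in the setup steps below.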
The next parameter controls post-login destination:
- Portal-bound: `/accounts/microsoft/login/?process=login&next=%2Fportal%2F`
- Admin-bound: `/accounts/microsoft/login/?process=login&next=%2Fadmin%2F`
Setting Up Microsoft SSO for a User
Step 1 — Verify environment variables are set
Microsoft SSO is only active when both of the following environment variables are set on the server (Azure Web App Application Settings):
| Variable | Description |
|---|---|
| MICROSOFT_CLIENT_ID | Azure AD app registration Client ID |
| MICROSOFT_CLIENT_SECRET | Azure AD app registration Client Secret |
| MICROSOFT_TENANT | Azure AD tenant ID or common (default: common) |
The Continue with Microsoft button will not appear on login pages unless both MICROSOFT_CLIENT_ID and MICROSOFT_CLIENT_SECRET are set.
Step 2 — Create the Django user account first
A Pie user account must exist before Microsoft SSO can be used. Create the user in the Django admin:
- Go to Django admin → Authentication and Authorization → Users → Add User.
- Set a username (can be anything — the email address is recommended for clarity).
- Set a temporary password (the user will never need to use it if they always log in via Microsoft).
- Click Save and continue editing.
- Fill in the user's Email address — this must exactly match the email in their Microsoft account (case-insensitive).
- Set Active = checked.
- Assign the appropriate staff status and group (see Roles and Permissions below).
- Save.
Step 3 — Link the Microsoft account (automatic on first sign-in)
The user's Microsoft account is linked to their Pie account automatically on their first successful Microsoft SSO login. No manual linking step is required — MicrosoftSocialAccountAdapter.pre_social_login() calls sociallogin.connect(request, user) on every login, which creates or updates the SocialAccount link.
Step 4 — Verify the link (optional)
Linked social accounts can be inspected in the Django admin under Social Accounts → Social accounts. Each linked account shows the provider (microsoft), the associated Django user, and the last login timestamp.
User Roles and Permissions
Pie uses three distinct access tiers, controlled by Django's built-in is_superuser, is_staff, and groups flags.
Superuser
- `is_superuser = True`
- Full, unrestricted access to all of Django admin — all models, all actions.
- Bypasses all group checks.
- Always passes `user_can_access_admin()`.
- Can access all admin sections hidden from regular staff (e.g., Parcel Column Import Manager, Parcel Row Import Manager, certain raw import pipeline models).
When to use: System administrators and developers only.
Staff (Admin Portal User)
- `is_staff = True` AND member of the `AdminPortal` group
- Can access Django admin (`/admin/`).
- Subject to Django's standard object-level permissions — only sees and can edit models their permissions allow.
- Cannot access superuser-only sections.
When to use: Board of equalization staff who use the admin to manage appeals, hearings, meetings, and documents.
Access rule: Both conditions must be true. is_staff = True alone is not sufficient — the user must also be in the AdminPortal group. The group name is configured via the ADMIN_PORTAL_GROUP_NAME environment variable (defaults to AdminPortal).
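Expressed as code, the rule looks like this. A sketch only: the real `user_can_access_admin()` takes a Django user object and reads the group name from settings.

```python
# The real check reads the group name from the ADMIN_PORTAL_GROUP_NAME
# environment variable; "AdminPortal" is the default.
ADMIN_PORTAL_GROUP_NAME = "AdminPortal"

def can_access_admin(is_superuser: bool, is_staff: bool, group_names: set) -> bool:
    # Superusers always pass; staff must also belong to the admin portal group.
    if is_superuser:
        return True
    return is_staff and ADMIN_PORTAL_GROUP_NAME in group_names

# can_access_admin(False, True, set())           -> False (staff alone is not enough)
# can_access_admin(False, True, {"AdminPortal"}) -> True
```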
Portal User
`is_staff = False`, `is_superuser = False`
- No access to Django admin.
- Can log in at `/accounts/login/` and access `/portal/` only.
- Sees appeal status and related information for their own appeals.
When to use: Property owners or representatives who filed an appeal and need read-only portal access.
Creating and Managing Users
Create a new user
- Django admin → Authentication and Authorization → Users → Add User.
- Enter username and password. Click Save and continue editing.
- Fill in:
- Email — required for Microsoft SSO; must match their Microsoft account email exactly.
- First name / Last name — optional but recommended.
- Active — must be checked for the user to be able to log in.
- Staff status — check this for admin users.
- Superuser status — check only for system administrators.
- Under Groups, add the user to `AdminPortal` for admin access.
- Under User permissions, add individual Django model permissions if needed (usually handled through groups instead).
- Save.
Deactivate a user
Set Active = unchecked. The user cannot log in via password or Microsoft SSO. Their data and audit history are preserved.
Reset a password
In the user's admin edit page, use the Change password link. The self-service password reset at /accounts/password/reset/ is currently out of service — staff must reset passwords manually through the admin.
Delete a user
Deleting a user is permanent and will break any foreign key references to that user. Deactivating is preferred.
Managing Groups and Permissions
Groups in Django are collections of permissions assigned to users collectively. In Pie, the AdminPortal group is the primary mechanism for granting admin access to staff users.
Create or edit the AdminPortal group
- Django admin → Authentication and Authorization → Groups.
- Click AdminPortal (or Add Group if it does not exist yet).
- Set the group Name to `AdminPortal` (must match the `ADMIN_PORTAL_GROUP_NAME` setting exactly).
- Under Permissions, add the Django model permissions the group's members should have (e.g., `appeals | appeal | Can add appeal`, `appeals | appeal | Can change appeal`, etc.).
- Save.
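Django's default model permissions follow a predictable codename pattern, which is what the `app | model | permission` rows in the admin correspond to. A quick illustration, using the `appeals` app label and `appeal` model from the examples above:

```python
def permission_codename(app_label: str, action: str, model: str) -> str:
    """Build the 'app_label.codename' string Django uses for its default
    model permissions (actions: add, change, delete, view)."""
    return f"{app_label}.{action}_{model}"

# "appeals | appeal | Can add appeal" in the admin corresponds to:
print(permission_codename("appeals", "add", "appeal"))
# appeals.add_appeal
```

These dotted strings are what you would pass to `user.has_perm()` when checking access in code.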
Add a user to the AdminPortal group
On the user's edit page, scroll to Groups and move AdminPortal to the Chosen groups list. Save.
Note that from the Group page, users can be viewed but not assigned directly — use the User edit page for group assignment.
Additional groups
You can create additional groups for more granular permission segmentation (e.g., a read-only staff group). Assign those groups to users in the same way. Pie does not enforce any specific secondary group names beyond AdminPortal for admin gate access.
Access Policy Summary
| User state | Can access `/admin/` | Can access `/portal/` |
|---|---|---|
| `is_superuser = True` | Yes | Yes (redirected to `/admin/`) |
| `is_staff = True` + `AdminPortal` group | Yes | Yes (redirected to `/admin/`) |
| `is_staff = True`, not in `AdminPortal` group | No | Yes |
| `is_staff = False`, `is_superuser = False` | No | Yes |
| `is_active = False` | No | No |
The admin access check is enforced by user_can_access_admin() in appeals/auth_roles.py, which patches admin.site.has_permission globally via enforce_admin_site_access_policy().
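The table above can be expressed as a small predicate. This is a behavioral sketch, not the real implementation in `appeals/auth_roles.py`:

```python
from types import SimpleNamespace

ADMIN_PORTAL_GROUP = "AdminPortal"

def user_can_access_admin(user) -> bool:
    """Mirror of the access table: inactive users never pass; superusers
    always pass; staff pass only when also in the AdminPortal group."""
    if not user.is_active:
        return False
    if user.is_superuser:
        return True
    return user.is_staff and ADMIN_PORTAL_GROUP in user.groups

su = SimpleNamespace(is_active=True, is_superuser=True, is_staff=True, groups=set())
staff = SimpleNamespace(is_active=True, is_superuser=False, is_staff=True, groups={"AdminPortal"})
staff_no_group = SimpleNamespace(is_active=True, is_superuser=False, is_staff=True, groups=set())
portal = SimpleNamespace(is_active=True, is_superuser=False, is_staff=False, groups=set())
print([user_can_access_admin(u) for u in (su, staff, staff_no_group, portal)])
# [True, True, False, False]
```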
Audit Logging
All login activity is recorded automatically:
- Successful login — logged by `audit_user_login()` in appeals/signals.py. Records user ID, username, email, IP address, path, and group membership at login time.
- Failed login attempt — logged by `audit_user_login_failed()`. Records the attempted username and IP. Passwords are never captured.
- Logout — logged by `audit_user_logout()`.
Audit records are viewable in Django admin → Audit Logs, or exportable via Data Tools → Audit Logs Export.
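The shape of a successful-login audit record might look like the following. This is a hypothetical illustration: the field names come from the description above, not from the actual model used by `appeals/signals.py`.

```python
from datetime import datetime, timezone

def build_login_audit_record(user_id, username, email, ip, path, groups):
    """Assemble the fields recorded on a successful login.
    Note: the password is never part of the record."""
    return {
        "event": "login_success",
        "user_id": user_id,
        "username": username,
        "email": email,
        "ip_address": ip,
        "path": path,
        "groups": sorted(groups),  # group membership at login time
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = build_login_audit_record(7, "jdoe", "jdoe@example.com",
                                  "203.0.113.5", "/admin/login/", {"AdminPortal"})
print(record["groups"])
# ['AdminPortal']
```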
Key Code Locations
| Path | Description |
|---|---|
| appeals/auth_roles.py | user_can_access_admin(), enforce_admin_site_access_policy(), role-aware redirect logic |
| appeals/adapters.py | MicrosoftSocialAccountAdapter — blocks unknown/inactive users from Microsoft SSO |
| appeals/account_adapters.py | RoleAwareAccountAdapter — post-login redirect by role, safe URL enforcement |
| appeals/views.py | admin_login_view(), user_login_view(), portal_home(), _microsoft_sso_configured() |
| appeals/signals.py | audit_user_login(), audit_user_login_failed(), audit_user_logout() |
| appealsys/settings.py | AUTHENTICATION_BACKENDS, ACCOUNT_ADAPTER, SOCIALACCOUNT_ADAPTER, ADMIN_PORTAL_GROUP_NAME, Microsoft SSO env vars |
| appealsys/urls.py | URL wiring for /login/, /admin/login/, /accounts/, /portal/ |