Create resource_extra_fields_export.py #47
Python script to export extra fields of items in a specific resource category
Walkthrough: A new example script exports items from an eLabFTW category to CSV and XLSX. It configures an API client with host/API key, fetches items, normalizes and orders the extra fields, and writes the output files.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Script as "Export Script"
    participant ApiClient as "elabapi_python.ApiClient"
    participant eLabAPI as "eLabFTW API"
    participant FS as "Filesystem (CSV/XLSX)"
    Script->>ApiClient: configure(API_HOST_URL, API_KEY)
    Script->>ApiClient: request items for CATEGORY_ID
    ApiClient->>eLabAPI: GET /items?category=CATEGORY_ID
    eLabAPI-->>ApiClient: items JSON
    ApiClient-->>Script: deliver items
    Script->>Script: parse items, normalize extra_fields, apply FIELD_ORDER
    Script->>FS: write UTF-8 CSV (headers + rows)
    Script->>FS: create XLSX, write headers (bold), rows, autofilter, autosize
    Script-->>FS: save files
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 4
🧹 Nitpick comments (2)
examples/resource_extra_fields_export.py (2)
121-131: Recommend wrapping execution in a main guard, Ensign. Running script logic at module level prevents reuse and testability. The Federation standard is to use `if __name__ == "__main__":` to encapsulate execution logic.

♻️ Proposed refactor: Add main guard
```diff
+def main():
+    print("Starting export...")
+
+    # Retrieve all items from the specified category
+    try:
+        items = itemsApi.read_items(cat=CATEGORY_ID)
+    except Exception as e:
+        print(f"Failed to retrieve items: {e}")
+        return
+
+    print(f"Items found: {len(items)}")
+    # ... rest of logic ...
+
+if __name__ == "__main__":
+    main()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/resource_extra_fields_export.py` around lines 121 - 131, The script runs top-level logic (prints and calls itemsApi.read_items with CATEGORY_ID) which prevents reuse and testing; move the execution into a main function (e.g., def main(): containing the print("Starting export..."), items = itemsApi.read_items(cat=CATEGORY_ID), and print(f"Items found: {len(items)}")) and then add the standard guard if __name__ == "__main__": main() so the module can be imported without side effects.
173-182: A minor course correction for modern Python idioms. The static analysis correctly notes that spread syntax is more elegant than concatenation. A small improvement, but precision matters on the bridge.

✨ Proposed refactor: Use spread operator
```diff
 # Define final column structure
-csv_columns = ["Ressourcen ID", "Titel"] + FIELD_ORDER
+csv_columns = ["Ressourcen ID", "Titel", *FIELD_ORDER]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/resource_extra_fields_export.py` around lines 173 - 182, Replace the list concatenation used to build csv_columns with Python's list unpacking (spread) to produce a cleaner expression: update the csv_columns construction that currently uses ["Ressourcen ID", "Titel"] + FIELD_ORDER to use ["Ressourcen ID", "Titel", *FIELD_ORDER]; keep the rest of the CSV writing logic (with open(OUTPUT_FILE, ...), csv.DictWriter, writer.writeheader(), writer.writerows(rows)) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/resource_extra_fields_export.py`:
- Around line 161-165: The code that converts list-type fields into a
comma-separated string should coerce each element to a string before joining to
avoid TypeError for non-string items; update the block that checks
isinstance(value, list) (using the variables value, field, row) to build a list
of strings (e.g., [str(v) for v in value]) and then join that list, so
row[field] gets a safe string representation.
- Around line 145-152: Wrap the json.loads call that parses item.metadata in a
try/except to catch json.JSONDecodeError (and optionally ValueError), so
malformed JSON doesn't crash the export; on error set extra_fields = {} (and/or
metadata = {}) and log or warn about the bad payload referencing item.metadata
or item.id for debugging. Specifically update the block that assigns metadata =
json.loads(metadata_raw) / extra_fields = metadata.get("extra_fields", {}) to
handle parsing failures and fall back to an empty dict for extra_fields.
- Around line 212-220: The loop over "for cell in col" silently swallows all
exceptions via a bare except; change it to catch specific exceptions (e.g.,
TypeError, ValueError, AttributeError) when calling len(str(cell.value)) and
handle them explicitly—either skip the cell or record/log the error—so
max_length is still computed safely; update the block around the "for cell in
col" loop and the calculation of adjusted_width and
ws.column_dimensions[column_letter].width to use the specific exception types
and emit a warning (or use the module logger) when an unexpected cell value is
encountered.
- Around line 36-47: The example hardcodes sensitive configuration
(API_HOST_URL, API_KEY) and other settings (CATEGORY_ID, OUTPUT_FILE,
OUTPUT_XLSX); replace those constants with reads from environment variables
(e.g., os.environ.get or dotenv) and add explicit validation/fallbacks—require
API_KEY and API_HOST_URL to be set and raise a clear error if missing, and allow
CATEGORY_ID, OUTPUT_FILE, OUTPUT_XLSX to default from env vars if present;
update any top-level comments to instruct users to set env vars rather than
paste keys into the file.
---
Nitpick comments:
In `@examples/resource_extra_fields_export.py`:
- Around line 121-131: The script runs top-level logic (prints and calls
itemsApi.read_items with CATEGORY_ID) which prevents reuse and testing; move the
execution into a main function (e.g., def main(): containing the print("Starting
export..."), items = itemsApi.read_items(cat=CATEGORY_ID), and print(f"Items
found: {len(items)}")) and then add the standard guard if __name__ ==
"__main__": main() so the module can be imported without side effects.
- Around line 173-182: Replace the list concatenation used to build csv_columns
with Python's list unpacking (spread) to produce a cleaner expression: update
the csv_columns construction that currently uses ["Ressourcen ID", "Titel"] +
FIELD_ORDER to use ["Ressourcen ID", "Titel", *FIELD_ORDER]; keep the rest of
the CSV writing logic (with open(OUTPUT_FILE, ...), csv.DictWriter,
writer.writeheader(), writer.writerows(rows)) unchanged.
```python
# Metadata is stored as JSON string
metadata_raw = item.metadata

if metadata_raw:
    metadata = json.loads(metadata_raw)
    extra_fields = metadata.get("extra_fields", {})
else:
    extra_fields = {}
```
We must anticipate malformed data, Number One.
The json.loads call will raise a JSONDecodeError if the metadata contains invalid JSON. In space—and in data processing—we prepare for the unexpected.
🛡️ Proposed fix: Handle JSON parsing errors
```diff
 # Metadata is stored as JSON string
 metadata_raw = item.metadata
 if metadata_raw:
-    metadata = json.loads(metadata_raw)
-    extra_fields = metadata.get("extra_fields", {})
+    try:
+        metadata = json.loads(metadata_raw)
+        extra_fields = metadata.get("extra_fields", {})
+    except json.JSONDecodeError:
+        print(f"Warning: Invalid metadata JSON for item {item.id}")
+        extra_fields = {}
 else:
     extra_fields = {}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 145 - 152, Wrap the
json.loads call that parses item.metadata in a try/except to catch
json.JSONDecodeError (and optionally ValueError), so malformed JSON doesn't
crash the export; on error set extra_fields = {} (and/or metadata = {}) and log
or warn about the bad payload referencing item.metadata or item.id for
debugging. Specifically update the block that assigns metadata =
json.loads(metadata_raw) / extra_fields = metadata.get("extra_fields", {}) to
handle parsing failures and fall back to an empty dict for extra_fields.
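As a standalone sketch of the defensive parsing described above (the helper name `parse_extra_fields` is illustrative, not part of the script):

```python
import json

def parse_extra_fields(metadata_raw, item_id=None):
    """Return the extra_fields dict, falling back to {} on malformed JSON."""
    if not metadata_raw:
        return {}
    try:
        metadata = json.loads(metadata_raw)
    except json.JSONDecodeError:
        # Warn instead of crashing the whole export on one bad payload
        print(f"Warning: Invalid metadata JSON for item {item_id}")
        return {}
    return metadata.get("extra_fields", {})
```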
```python
# Convert list-type fields into comma-separated string
if isinstance(value, list):
    value = ", ".join(value)

row[field] = value
```
Ensure type safety before joining list values.
If value contains non-string elements, the join operation will fail. A minor adjustment ensures we navigate safely through any data anomalies.
🛡️ Proposed fix: Convert elements to strings
🛡️ Proposed fix: Convert elements to strings

```diff
 # Convert list-type fields into comma-separated string
 if isinstance(value, list):
-    value = ", ".join(value)
+    value = ", ".join(str(v) for v in value)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 161 - 165, The code
that converts list-type fields into a comma-separated string should coerce each
element to a string before joining to avoid TypeError for non-string items;
update the block that checks isinstance(value, list) (using the variables value,
field, row) to build a list of strings (e.g., [str(v) for v in value]) and then
join that list, so row[field] gets a safe string representation.
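A minimal standalone version of the suggested coercion (the helper name is hypothetical):

```python
def to_cell_text(value):
    """Render an extra-field value as CSV-safe text; lists become a
    comma-separated string, coercing each element to str first so
    ints/floats don't raise TypeError."""
    if isinstance(value, list):
        return ", ".join(str(v) for v in value)
    return value
```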
```python
for cell in col:
    try:
        if cell.value:
            max_length = max(max_length, len(str(cell.value)))
    except:
        pass

adjusted_width = max_length + 2
ws.column_dimensions[column_letter].width = adjusted_width
```
A bare except clause is unworthy of Starfleet protocols.
This silently swallows all exceptions, making debugging nearly impossible when anomalies occur. We must be specific about what we catch, or at minimum, log the encounter.
🛡️ Proposed fix: Handle exceptions properly
```diff
 for cell in col:
     try:
         if cell.value:
             max_length = max(max_length, len(str(cell.value)))
-    except:
-        pass
+    except (TypeError, AttributeError):
+        continue
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
for cell in col:
    try:
        if cell.value:
            max_length = max(max_length, len(str(cell.value)))
    except (TypeError, AttributeError):
        continue
adjusted_width = max_length + 2
ws.column_dimensions[column_letter].width = adjusted_width
```
🧰 Tools
🪛 Ruff (0.15.2)
[error] 216-216: Do not use bare `except` (E722)
[error] 216-217: `try`-`except`-`pass` detected, consider logging the exception (S110)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 212 - 220, The loop
over "for cell in col" silently swallows all exceptions via a bare except;
change it to catch specific exceptions (e.g., TypeError, ValueError,
AttributeError) when calling len(str(cell.value)) and handle them
explicitly—either skip the cell or record/log the error—so max_length is still
computed safely; update the block around the "for cell in col" loop and the
calculation of adjusted_width and ws.column_dimensions[column_letter].width to
use the specific exception types and emit a warning (or use the module logger)
when an unexpected cell value is encountered.
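The width logic can also be exercised without openpyxl; this sketch mirrors the reviewer's suggestion of catching specific exceptions (the function name and padding are illustrative, not part of the script):

```python
def column_widths(rows, padding=2):
    """Compute a display width per column index from the longest
    stringified cell, skipping values that cannot be measured."""
    widths = {}
    for row in rows:
        for idx, value in enumerate(row):
            try:
                length = len(str(value)) if value is not None else 0
            except (TypeError, AttributeError):
                continue  # skip unmeasurable cells instead of hiding all errors
            widths[idx] = max(widths.get(idx, 0), length)
    return {idx: w + padding for idx, w in widths.items()}
```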
MoustaphaCamara left a comment:
suggestion: add to .gitignore:

```diff
diff --git a/.gitignore b/.gitignore
index 0ada496..31fb00d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -3,3 +3,4 @@ openapi.yaml
 html
 venv
 __pycache__
+export.*
diff --git a/examples/resource_extra_fields_export.py b/examples/resource_extra_fields_export.py
index c91031b..0f8c553 100644
--- a/examples/resource_extra_fields_export.py
+++ b/examples/resource_extra_fields_export.py
@@ -152,9 +152,12 @@ for item in items:
         extra_fields = {}
 
     # Extract all fields defined in FIELD_ORDER
+    normalized_fields = {k.strip().lower(): v for k, v in extra_fields.items()}
+
     for field in FIELD_ORDER:
-        if field in extra_fields:
-            value = extra_fields[field].get("value", "")
+        key = field.strip().lower()
+        if key in normalized_fields:
+            value = normalized_fields[key].get("value", "")
         else:
             value = ""
```
Thanks in advance for this addition! We'll make sure to merge it as soon as it's corrected :)
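The normalization in the diff above can be isolated as a pair of small helpers (the names are illustrative; only the `.strip().lower()` keying comes from the suggested change):

```python
def normalize_keys(extra_fields):
    """Key extra fields by a case- and whitespace-insensitive form
    so FIELD_ORDER lookups tolerate '  Ort ' vs 'ort'."""
    return {k.strip().lower(): v for k, v in extra_fields.items()}

def lookup_value(extra_fields, field, default=""):
    """Fetch a field's 'value' entry using the normalized key."""
    normalized = normalize_keys(extra_fields)
    entry = normalized.get(field.strip().lower())
    return entry.get("value", default) if entry else default
```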
Enhance resource_extra_fields_export example script to support multiple API response formats and improve robustness.

- Use `_preload_content=False` and manually decode the JSON response.
- Support metadata returned as either a JSON string or an object.
- Normalize extra field keys for case/space-insensitive matching.
- Add backward compatibility for different elabapi_python versions (dict-style and object-style access).
- Use safe `.get()` access for item fields.
- Ignore generated export files via `.gitignore` (`export.*`).

This makes the example script more resilient across API versions and metadata formats.
I made a pull request in your fork so you can cherry pick my changes and keep what you need 👍
Co-authored-by: Mouss <104678778+MoustaphaCamara@users.noreply.github.com>
Actionable comments posted: 1
🧹 Nitpick comments (2)
examples/resource_extra_fields_export.py (2)
53-74: Number One, we appear to be reinventing the warp drive. The `examples/client.py` module already provides a well-structured, reusable API client configuration that handles proxy settings, CA certificates, and debug flags. Duplicating this logic here creates maintenance burden and inconsistencies.

Additionally, the API key is configured twice: via `configuration.api_key` (lines 55-56) and via `set_default_header` (lines 68-71). The `client.py` pattern uses only `set_default_header`, which is the authoritative method for this library.

♻️ Proposed refactor: Reuse the existing client module
```diff
-import elabapi_python
 import json
 import csv
+import os
 from openpyxl import Workbook
 from openpyxl.styles import Font
 from openpyxl.utils import get_column_letter
 
+# Import the shared API client configuration
+from client import api_client
+import elabapi_python
 
 #########################
 # CONFIG
 #########################
 
-# Base URL of your eLabFTW API (must include /api/v2)
-API_HOST_URL = 'https://YOUR-URL/api/v2'
-
-# Personal API key generated in eLabFTW
-API_KEY = 'YOUR API Key'
-
 # Category ID from which items should be exported
-CATEGORY_ID = 123456
+CATEGORY_ID = int(os.environ.get('ELAB_CATEGORY_ID', '123456'))
 
 # Output filenames
-OUTPUT_FILE = "export.csv"
-OUTPUT_XLSX = "export.xlsx"
-
-#########################
-# API CONFIGURATION
-#########################
-
-# Create configuration object for the eLabFTW API
-configuration = elabapi_python.Configuration()
-configuration.api_key['api_key'] = API_KEY
-configuration.api_key_prefix['api_key'] = 'Authorization'
-configuration.host = API_HOST_URL
-
-configuration.debug = False
-
-# Set to True if valid SSL certificates are used
-configuration.verify_ssl = True
-
-# Create API client
-api_client = elabapi_python.ApiClient(configuration)
-
-# Set authorization header manually (required for authentication)
-api_client.set_default_header(
-    header_name='Authorization',
-    header_value=API_KEY
-)
+OUTPUT_FILE = os.environ.get('EXPORT_CSV', 'export.csv')
+OUTPUT_XLSX = os.environ.get('EXPORT_XLSX', 'export.xlsx')
 
 # Load Items API endpoint
 itemsApi = elabapi_python.ItemsApi(api_client)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/resource_extra_fields_export.py` around lines 53 - 74, The code duplicates client setup and sets the API key twice; instead import and reuse the shared client builder from examples.client (e.g., the factory function that returns an ApiClient) rather than instantiating elabapi_python.Configuration/ApiClient directly, remove the configuration.api_key[...] assignment and keep only the authoritative set_default_header usage, and then instantiate ItemsApi with the reused ApiClient (look for symbols like Configuration, api_key, set_default_header, ApiClient, ItemsApi and the shared factory in examples.client) so proxy/CA/debug logic is centralized.
176-176: A minor course correction, as recommended by our static analysis sensors. The linter suggests using spread syntax for list concatenation, which is more idiomatic in modern Python.

♻️ Proposed fix
```diff
-csv_columns = ["Ressourcen ID", "Titel"] + FIELD_ORDER
+csv_columns = ["Ressourcen ID", "Titel", *FIELD_ORDER]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/resource_extra_fields_export.py` at line 176, Replace the list concatenation that builds csv_columns using the + operator with Python's list unpacking: construct a new list literal containing the two initial strings ("Ressourcen ID" and "Titel") and expand FIELD_ORDER into it using the star-unpacking operator; update the assignment to csv_columns to use this pattern so csv_columns becomes a single list literal with the items from FIELD_ORDER unpacked.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/resource_extra_fields_export.py`:
- Around line 125-131: Wrap the script execution in a main guard (if __name__ ==
"__main__":) so importing this module won't run the export immediately, and add
defensive error handling around the itemsApi.read_items call: call
itemsApi.read_items(cat=CATEGORY_ID) inside a try/except that catches broad
request/API exceptions (and Exception as a fallback), logs/prints a clear
diagnostic including the exception message and returns/exit non‑zero on failure;
ensure variables referenced after the call (like items and len(items)) are only
used when the call succeeds. Reference: itemsApi.read_items and the top-level
script body where the print("Starting export...") and items =
itemsApi.read_items(...) occur.
---
Nitpick comments:
In `@examples/resource_extra_fields_export.py`:
- Around line 53-74: The code duplicates client setup and sets the API key
twice; instead import and reuse the shared client builder from examples.client
(e.g., the factory function that returns an ApiClient) rather than instantiating
elabapi_python.Configuration/ApiClient directly, remove the
configuration.api_key[...] assignment and keep only the authoritative
set_default_header usage, and then instantiate ItemsApi with the reused
ApiClient (look for symbols like Configuration, api_key, set_default_header,
ApiClient, ItemsApi and the shared factory in examples.client) so proxy/CA/debug
logic is centralized.
- Line 176: Replace the list concatenation that builds csv_columns using the +
operator with Python's list unpacking: construct a new list literal containing
the two initial strings ("Ressourcen ID" and "Titel") and expand FIELD_ORDER
into it using the star-unpacking operator; update the assignment to csv_columns
to use this pattern so csv_columns becomes a single list literal with the items
from FIELD_ORDER unpacked.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: db666e3f-4259-40b2-ae9a-9ef784de1937
📒 Files selected for processing (1)
examples/resource_extra_fields_export.py
```python
print("Starting export...")

# Retrieve all items from the specified category
items = itemsApi.read_items(cat=CATEGORY_ID)

print(f"Items found: {len(items)}")
```
Engage defensive protocols: the API call lacks error handling.
The read_items call on line 128 can fail due to network issues, authentication errors, or invalid category IDs. Currently, any exception would crash the script without a clear diagnostic message. Additionally, this script lacks a if __name__ == "__main__": guard, meaning it executes immediately upon import—problematic if someone attempts to reuse functions from this module.
🛡️ Proposed fix: Add main guard and API error handling
```diff
+def main():
+    print("Starting export...")
-print("Starting export...")
+    # Retrieve all items from the specified category
+    try:
+        items = itemsApi.read_items(cat=CATEGORY_ID)
+    except elabapi_python.ApiException as e:
+        print(f"API request failed: {e}")
+        return
-# Retrieve all items from the specified category
-items = itemsApi.read_items(cat=CATEGORY_ID)
+    print(f"Items found: {len(items)}")
+    # ... rest of logic ...
-print(f"Items found: {len(items)}")
+if __name__ == "__main__":
+    main()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 125 - 131, Wrap the
script execution in a main guard (if __name__ == "__main__":) so importing this
module won't run the export immediately, and add defensive error handling around
the itemsApi.read_items call: call itemsApi.read_items(cat=CATEGORY_ID) inside a
try/except that catches broad request/API exceptions (and Exception as a
fallback), logs/prints a clear diagnostic including the exception message and
returns/exit non‑zero on failure; ensure variables referenced after the call
(like items and len(items)) are only used when the call succeeds. Reference:
itemsApi.read_items and the top-level script body where the print("Starting
export...") and items = itemsApi.read_items(...) occur.
feat: examples: improve extra fields export compatibility and parsing
There was a problem hiding this comment.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/resource_extra_fields_export.py`:
- Around line 149-155: The dict-backed metadata branch currently uses
getattr(metadata, "extra_fields", {}) which discards extra fields when
metadata_raw is a dict; change the branching so that if metadata_raw is a dict
you set metadata = metadata_raw and extract extra_fields =
metadata.get("extra_fields", {}) (i.e., check isinstance(metadata_raw, dict) or
use an elif for dict before the getattr fallback), otherwise fall back to
attribute access via getattr(metadata, "extra_fields", {}) or {} to preserve
compatibility with object-backed metadata.
- Around line 127-129: The current call to
ItemsApi.read_items(category=CATEGORY_ID) only fetches the first page (default
page size 15), so change it to either paginate or explicitly set a larger limit:
implement a loop that calls ItemsApi.read_items(cat=CATEGORY_ID, limit=...,
offset=offset) decoding response.data and appending to items until an empty page
is returned, or pass a documented limit (e.g., limit=9999) to
ItemsApi.read_items to guarantee full retrieval; update the variables response,
items, and the pagination logic around ItemsApi.read_items to reflect the chosen
approach.
- Around line 53-71: Remove the nonstandard config-based auth and the duplicate
header: delete the two lines that set configuration.api_key and
configuration.api_key_prefix (the uses of configuration.api_key and
configuration.api_key_prefix after creating Configuration()), keep the
Configuration() setup for host and verify_ssl, instantiate
ApiClient(configuration) as before, and rely on a single call to
api_client.set_default_header(header_name='Authorization', header_value=API_KEY)
to set auth; this consolidates authentication to the standard pattern used in
examples.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7af3c110-37ac-48ea-afc3-c8d706b2b85a
📒 Files selected for processing (2)
.gitignore
examples/resource_extra_fields_export.py
✅ Files skipped from review due to trivial changes (1)
- .gitignore
```python
# Create configuration object for the eLabFTW API
configuration = elabapi_python.Configuration()
configuration.api_key['api_key'] = API_KEY
configuration.api_key_prefix['api_key'] = 'Authorization'
configuration.host = API_HOST_URL

configuration.debug = False

# Set to True if valid SSL certificates are used
configuration.verify_ssl = True

# Create API client
api_client = elabapi_python.ApiClient(configuration)

# Set authorization header manually (required for authentication)
api_client.set_default_header(
    header_name='Authorization',
    header_value=API_KEY
)
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
cfg="$(fd '^configuration\.py$' | head -n1)"
example_cfg="examples/client.py"
test -n "$cfg"
echo "Inspecting generated auth semantics in: $cfg"
sed -n '/def get_api_key_with_prefix/,/def /p' "$cfg"
echo
echo "Inspecting example auth setup in: $example_cfg"
rg -n -C2 'set_default_header|api_key_prefix|api_key\[' "$example_cfg"
```

Repository: elabftw/elabapi-python
Length of output: 48
🏁 Script executed:

```bash
# Find key files and understand repository structure
fd -t f -e py '(client\.py|configuration\.py|resource_extra)' | head -20
```

Repository: elabftw/elabapi-python
Length of output: 125

🏁 Script executed:

```bash
# List top-level structure
ls -la
```

Repository: elabftw/elabapi-python
Length of output: 763

🏁 Script executed:

```bash
# Search for elabapi_python module location and Configuration class
rg -l "class Configuration" --max-count=5
```

Repository: elabftw/elabapi-python
Length of output: 48

🏁 Script executed:

```bash
# Examine both example files to understand auth setup
echo "=== examples/client.py ==="
cat -n examples/client.py
echo -e "\n=== examples/resource_extra_fields_export.py (first 80 lines) ==="
head -80 examples/resource_extra_fields_export.py | cat -n
```

Repository: elabftw/elabapi-python
Length of output: 5742

🏁 Script executed:

```bash
# Check imports and find elabapi_python
rg "^import|^from" examples/ | head -20
```

Repository: elabftw/elabapi-python
Length of output: 1135

🏁 Script executed:

```bash
# Search for other uses of api_key_prefix in the codebase to understand the pattern
rg "api_key_prefix" --max-count=10
```

Repository: elabftw/elabapi-python
Length of output: 164

🏁 Script executed:

```bash
# Look at how other examples handle authentication
rg -l "Configuration\|ApiClient" examples/*.py | head -10
```

Repository: elabftw/elabapi-python
Length of output: 48

🏁 Script executed:

```bash
# Check if there are any other authentication patterns in examples
fd -t f -e py examples/ | xargs rg -l "set_default_header|api_key" | head -10
```

Repository: elabftw/elabapi-python
Length of output: 494

🏁 Script executed:

```bash
# Look for OpenAPI generated client documentation or patterns
fd -t f -e py -e md | xargs rg -l "get_api_key_with_prefix|prepend_prefix" 2>/dev/null | head -10
```

Repository: elabftw/elabapi-python
Length of output: 48

🏁 Script executed:

```bash
# Check all examples for their authentication patterns
echo "=== Checking all example files for auth setup ==="
for file in examples/*.py; do
    echo -e "\n--- $file ---"
    rg -n "Configuration\|set_default_header|api_key\[|api_key_prefix" "$file" | head -15
done
```

Repository: elabftw/elabapi-python
Length of output: 1334

🏁 Script executed:

```bash
# Let's look at the full context of both authentication setups in resource_extra_fields_export.py
echo "=== Full resource_extra_fields_export.py auth section ==="
sed -n '50,75p' examples/resource_extra_fields_export.py | cat -n
```

Repository: elabftw/elabapi-python
Length of output: 1008

🏁 Script executed:

```bash
# Check README or any documentation about authentication
cat README.md | head -100
```

Repository: elabftw/elabapi-python
Length of output: 3790

🏁 Script executed:

```bash
# Search for requirements or setup.py to understand elabapi_python version/source
fd -t f '(requirements|setup|pyproject)' | xargs cat 2>/dev/null | head -50
```

Repository: elabftw/elabapi-python
Length of output: 48

🏁 Script executed:

```bash
# Let's check if there's any documentation in comments about how api_key works
rg -i "api.?key|authorization" --type md | head -20
```

Repository: elabftw/elabapi-python
Length of output: 212

🏁 Script executed:

```bash
# Let's check all examples to see if any others use api_key/api_key_prefix pattern
rg -c "api_key\[|api_key_prefix" examples/*.py
```

Repository: elabftw/elabapi-python
Length of output: 108

🏁 Script executed:

```bash
# Check if examples/00-getting-started.py (first example) shows auth setup
sed -n '1,80p' examples/00-getting-started.py | cat -n
```

Repository: elabftw/elabapi-python
Length of output: 4970
Consolidate authentication configuration to the standard pattern.
Lines 55–56 employ configuration.api_key and configuration.api_key_prefix, which deviate from the established practice shown in the README and examples/client.py. Moreover, lines 68–71 redundantly set the same Authorization header, creating unnecessary duplication. Remove the configuration-based approach and rely solely on api_client.set_default_header(), as demonstrated in the standard examples/client.py (lines 55–58).
Suggested change

```diff
 # Create configuration object for the eLabFTW API
 configuration = elabapi_python.Configuration()
-configuration.api_key['api_key'] = API_KEY
-configuration.api_key_prefix['api_key'] = 'Authorization'
 configuration.host = API_HOST_URL
```

📝 Committable suggestion
```python
# Create configuration object for the eLabFTW API
configuration = elabapi_python.Configuration()
configuration.host = API_HOST_URL

configuration.debug = False

# Set to True if valid SSL certificates are used
configuration.verify_ssl = True

# Create API client
api_client = elabapi_python.ApiClient(configuration)

# Set authorization header manually (required for authentication)
api_client.set_default_header(
    header_name='Authorization',
    header_value=API_KEY
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 53 - 71, Remove the
nonstandard config-based auth and the duplicate header: delete the two lines
that set configuration.api_key and configuration.api_key_prefix (the uses of
configuration.api_key and configuration.api_key_prefix after creating
Configuration()), keep the Configuration() setup for host and verify_ssl,
instantiate ApiClient(configuration) as before, and rely on a single call to
api_client.set_default_header(header_name='Authorization', header_value=API_KEY)
to set auth; this consolidates authentication to the standard pattern used in
examples.
```python
# Retrieve all items from the specified category
response = itemsApi.read_items(cat=CATEGORY_ID, _preload_content=False)
items = json.loads(response.data.decode("utf-8"))
```
🧩 Analysis chain
🌐 Web query:
For eLabFTW / elabapi_python ItemsApi.read_items, what pagination parameters and default page size apply when listing items by category? Does one call without limitoroffset return all items or only the first page?
💡 Result:
For eLabFTW / elabapi_python ItemsApi.read_items, the pagination parameters are limit and offset, which are query parameters supported by the underlying REST API endpoint /api/v2/items. The default page size (limit) is 15. The cat parameter filters items by category ID (supports comma-separated lists including "null"). A call without limit or offset uses the defaults (limit=15, offset=0), returning only the first page, not all items.
Citations:
- 1: https://raw.githubusercontent.com/elabftw/elabftw/refs/tags/5.4.0/apidoc/v2/openapi.yaml
- 2: https://github.com/elabftw/elabapi-python/blob/master/examples/12-experiments-export-pdf.py
Implement pagination to retrieve all items, or document the intended limit explicitly.
This implementation retrieves only the first page of results. The ItemsApi.read_items() method operates with a default page size of 15 items when no limit parameter is specified. When a category exceeds this batch size, the export will terminate prematurely without notification to the operator.
To ensure complete data retrieval, you have two viable approaches: either implement pagination by iterating with offset parameters until all items are exhausted, or explicitly set and document the limit parameter to match your intended scope (as done in examples/002-extra-field-to-column.py with limit=9999). The current code leaves the behavior ambiguous and risks silent data loss.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 127 - 129, the current
call to ItemsApi.read_items(cat=CATEGORY_ID, _preload_content=False) only fetches the first page
(default page size 15), so change it to either paginate or explicitly set a
larger limit: implement a loop that calls ItemsApi.read_items(cat=CATEGORY_ID,
limit=..., offset=offset) decoding response.data and appending to items until an
empty page is returned, or pass a documented limit (e.g., limit=9999) to
ItemsApi.read_items to guarantee full retrieval; update the variables response,
items, and the pagination logic around ItemsApi.read_items to reflect the chosen
approach.
```python
if metadata_raw:
    if isinstance(metadata_raw, str):
        metadata = json.loads(metadata_raw)
        extra_fields = metadata.get("extra_fields", {})
    else:
        metadata = metadata_raw
        extra_fields = getattr(metadata, "extra_fields", {}) or {}
```
The dict-backed metadata branch loses every extra field.
After Line 129, item is a Python dict. If metadata_raw is already a dict, getattr(metadata_raw, "extra_fields", {}) always falls back to {}, so the export appears successful while silently blanking those columns. Handle dicts with .get("extra_fields", {}) before falling back to attribute access.
🖖 Suggested change

```diff
 if metadata_raw:
     if isinstance(metadata_raw, str):
         metadata = json.loads(metadata_raw)
         extra_fields = metadata.get("extra_fields", {})
+    elif isinstance(metadata_raw, dict):
+        extra_fields = metadata_raw.get("extra_fields", {}) or {}
     else:
-        metadata = metadata_raw
-        extra_fields = getattr(metadata, "extra_fields", {}) or {}
+        extra_fields = getattr(metadata_raw, "extra_fields", {}) or {}
 else:
     extra_fields = {}
```
extra_fields = {}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/resource_extra_fields_export.py` around lines 149 - 155, The
dict-backed metadata branch currently uses getattr(metadata, "extra_fields", {})
which discards extra fields when metadata_raw is a dict; change the branching so
that if metadata_raw is a dict you set metadata = metadata_raw and extract
extra_fields = metadata.get("extra_fields", {}) (i.e., check
isinstance(metadata_raw, dict) or use an elif for dict before the getattr
fallback), otherwise fall back to attribute access via getattr(metadata,
"extra_fields", {}) or {} to preserve compatibility with object-backed metadata.