📝 Walkthrough

This PR expands the F1 data science pipeline by introducing new lap-time prediction scripts for multiple race venues (Las Vegas, Qatar, Abu Dhabi) with machine learning models and SHAP explainability, while refactoring the FastAPI service by consolidating main1.py into main.py and updating input schemas and endpoint configurations.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Pull request overview
This PR appears to restructure the inference API entrypoint while adding/updating several model-training scripts and committing a new Keras model artifact.
Changes:
- Remove `main1.py` and update `main.py` FastAPI configuration (docs routing, formatting, and endpoints).
- Add new training scripts for additional races (`8*.py`, `10*.py`) and adjust existing training scripts/model output naming (`12*.py`, `17.py`).
- Add a committed model artifact (`abu_dhabi_ffnmodel.keras`) and adjust `.gitignore` patterns.
Reviewed changes
Copilot reviewed 9 out of 14 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| main1.py | Removes the previous FastAPI app module used by tests/clients. |
| main.py | Updates FastAPI app initialization, lifespan model loading, and endpoint routing. |
| abu_dhabi_ffnmodel.keras | Adds a serialized Keras model artifact to the repo. |
| 8.py | New Las Vegas training script exporting a sklearn model via joblib. |
| 8-3.py | New Las Vegas script variant attempting to export to JSON. |
| 10.py | New Qatar training script exporting a sklearn model via joblib. |
| 10-3.py | New Qatar script variant attempting to export to JSON. |
| 17.py | Updates Keras model save filename for Abu Dhabi FFN model training. |
| 12.py | Adjusts Abu Dhabi ensemble training data and output artifact name. |
| 12-3.py | Reformats/updates Abu Dhabi XGBoost training and changes JSON output filename. |
| .gitignore | Removes multiple ignore patterns (including model artifacts). |
```python
model.get_booster().save_model("lasvegasmodel.json")
print("lasvegasmodel.json saved successfully")
```

This script calls model.get_booster() but model is a GradientBoostingRegressor, which doesn't have get_booster(). This will raise AttributeError at runtime. Either switch to an XGBoost model (e.g., XGBRegressor) if you need to export a .json booster, or keep the sklearn model export via joblib.dump(...).

```diff
-model.get_booster().save_model("lasvegasmodel.json")
-print("lasvegasmodel.json saved successfully")
+joblib.dump(model, "lasvegasmodel.joblib")
+print("lasvegasmodel.joblib saved successfully")
```
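To make the failure mode concrete, here is a minimal self-contained sketch. The stub class below is a stand-in for sklearn's GradientBoostingRegressor, not the real estimator: get_booster() is XGBoost API, so calling it on a plain sklearn-style regressor raises AttributeError before anything is saved, while pickle (the protocol joblib.dump wraps for sklearn objects) round-trips the model fine.

```python
import pickle

class GradientBoostingRegressorStub:
    """Hypothetical stand-in for sklearn.ensemble.GradientBoostingRegressor."""
    def predict(self, X):
        return [0.0 for _ in X]

model = GradientBoostingRegressorStub()

# get_booster() exists on XGBoost models, not sklearn estimators:
try:
    model.get_booster().save_model("lasvegasmodel.json")
except AttributeError as e:
    print(f"AttributeError: {e}")

# sklearn estimators are pickle-serializable; joblib.dump uses this same
# protocol (joblib is preferred when the model holds large numpy arrays).
blob = pickle.dumps(model)
restored = pickle.loads(blob)
print(type(restored).__name__)  # GradientBoostingRegressorStub
```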
```python
model.get_booster().save_model("abudhabimodel.json")
print("abudhabimodel.json saved successfully")
```

12-3.py now saves the XGBoost model to abudhabimodel.json, but the FastAPI service (main.py) only attempts to load abu_dhabi_model.json. Unless something else renames/copies the file, this will leave the service using an outdated model or failing to load the newly trained one. Align the filename between training and inference (or make the inference path configurable).

```diff
-model.get_booster().save_model("abudhabimodel.json")
-print("abudhabimodel.json saved successfully")
+model.get_booster().save_model("abu_dhabi_model.json")
+print("abu_dhabi_model.json saved successfully")
```
```diff
-model.save("abu_dhabi_ffnmodel.keras")
+model.save("abudhabiffnmodel.keras")
```

The model is saved as abudhabiffnmodel.keras, but this PR adds a model artifact named abu_dhabi_ffnmodel.keras (with underscores). This mismatch makes it easy to commit/load the wrong file. Use a single canonical filename (and update any references accordingly).

```diff
-model.save("abudhabiffnmodel.keras")
+model.save("abu_dhabi_ffnmodel.keras")
```
```python
weather = requests.get(
    "http://api.openweathermap.org/data/2.5/forecast"
    f"?lat={LAT}&lon={LON}&appid={OPENWEATHER_API}&units=metric",
```

The OpenWeather request uses http:// and includes the API key in the query string. Using plain HTTP can expose the key and response data to interception/mitm. Switch to https://api.openweathermap.org/... (and ideally avoid logging the full URL).

```diff
-weather = requests.get(
-    "http://api.openweathermap.org/data/2.5/forecast"
-    f"?lat={LAT}&lon={LON}&appid={OPENWEATHER_API}&units=metric",
+params = {
+    "lat": LAT,
+    "lon": LON,
+    "appid": OPENWEATHER_API,
+    "units": "metric",
+}
+weather = requests.get(
+    "https://api.openweathermap.org/data/2.5/forecast",
+    params=params,
```
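For context on why the scheme still matters even with params=: requests URL-encodes the dict into the query string, so the API key travels in the URL either way; HTTPS is what protects it in transit (TLS encrypts the path and query). A stdlib-only sketch with placeholder values:

```python
from urllib.parse import urlencode

# Placeholder values, standing in for the script's LAT/LON/OPENWEATHER_API.
LAT, LON, OPENWEATHER_API = 24.47, 54.60, "dummy-key"

params = {"lat": LAT, "lon": LON, "appid": OPENWEATHER_API, "units": "metric"}

# This is roughly the URL requests builds from params= before sending:
url = "https://api.openweathermap.org/data/2.5/forecast?" + urlencode(params)
print(url)
```

Note the key is visible in the final URL, which is why logging the full URL should also be avoided.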
```python
)

weather = requests.get(
    "http://api.openweathermap.org/data/2.5/forecast"
```

The OpenWeather request uses http:// and includes the API key in the query string. Using plain HTTP can expose the key and response data to interception/mitm. Switch to https://api.openweathermap.org/... (and ideally avoid logging the full URL).

```diff
-    "http://api.openweathermap.org/data/2.5/forecast"
+    "https://api.openweathermap.org/data/2.5/forecast"
```
```diff
 app = FastAPI(
     title="F1 Race Pace Predictor",
     description="API for predicting F1 race pace based on qualifying and weather data",
     version="1.0.0",
     lifespan=lifespan,
     docs_url="/",
     redoc_url=None
 )

 class PredictionInput(BaseModel):
-    driver_code: str
-    qualifying_time: float
-    clean_air_race_pace: float
+    driver_code: ver
+    qualifying_time: 90.90
+    clean_air_race_pace: 89.01
     rain_prob: float = 0.0
     temperature: float = 25.0

 @app.post("/predict")
 async def predict(input_data: PredictionInput):
     start_time = time.time()
     model = ml_models.get("f1_model")

     if model is None:
         raise HTTPException(status_code=500, detail="Model not loaded")

     drivers = lookup_data.get("data", {}).get("drivers", {})
     team_score = drivers.get(input_data.driver_code.upper(), 0.5)

     features = np.array([[input_data.qualifying_time,input_data.rain_prob,input_data.temperature,team_score,input_data.clean_air_race_pace]])

     try:
         dmatrix = xgb.DMatrix(features)
         prediction = model.predict(dmatrix)[0]
         latency = time.time() - start_time
-        return {"driver": input_data.driver_code.upper(),"predicted_pace": float(prediction),"meta": {"latency": f"{latency:.4f}s","model": "abu_dhabi_xgb_v1"}}
+        return {
+            "driver": input_data.driver_code.upper(),
+            "predicted_pace": float(prediction),
+            "meta": {
+                "latency": f"{latency:.4f}s",
+                "model": "abu_dhabi_xgb_v1"
+            }
+        }
     except Exception as e:
         raise HTTPException(status_code=500, detail=str(e))

-@app.get("/")
-async def root():
-    return {"message": "F1 Race Pace Predictor API","version": "1.0.0"}
+@app.get("/info", include_in_schema=False)
+async def info():
+    return {
+        "message": "F1 Race Pace Predictor API",
+        "version": "1.0.0"
+    }
```
The app now serves Swagger UI at / (docs_url="/") and the JSON root endpoint was renamed to /info. This is a breaking change for existing consumers and the current test suite still imports main1 and calls GET / expecting JSON (see test_main.py). Either restore a JSON handler at / (and move docs back to /docs), or update tests/clients and keep a compatibility redirect/alias endpoint.
```python
model.get_booster().save_model("qatarmodel.json")
print("qatarmodel.json saved successfully")
```

This script calls model.get_booster() but model is a GradientBoostingRegressor, which doesn't have get_booster(). This will raise AttributeError at runtime. Either switch to an XGBoost model (e.g., XGBRegressor) if you need to export a .json booster, or keep the sklearn model export via joblib.dump(...).

```diff
-model.get_booster().save_model("qatarmodel.json")
-print("qatarmodel.json saved successfully")
+joblib.dump(model, "qatarmodel.joblib")
+print("qatarmodel.joblib saved successfully")
```
```python
)

weather = requests.get(
    "http://api.openweathermap.org/data/2.5/forecast"
```

The OpenWeather request uses http:// and includes the API key in the query string. Using plain HTTP can expose the key and response data to interception/mitm. Switch to https://api.openweathermap.org/... (and ideally avoid logging the full URL).

```diff
-    "http://api.openweathermap.org/data/2.5/forecast"
+    "https://api.openweathermap.org/data/2.5/forecast"
```
```python
)

weather = requests.get(
    "http://api.openweathermap.org/data/2.5/forecast"
```

The OpenWeather request uses http:// and includes the API key in the query string. Using plain HTTP can expose the key and response data to interception/mitm. Switch to https://api.openweathermap.org/... (and ideally avoid logging the full URL).

```diff
-    "http://api.openweathermap.org/data/2.5/forecast"
+    "https://api.openweathermap.org/data/2.5/forecast"
```
```python
try:
    with open("abu_dhabi_model.json", "rb") as f:
        booster = xgb.Booster()
        booster.load_model(f)
    ml_models["f1_model"] = booster
    print("Model loaded as binary.")
except Exception as e2:
    print(f"Error loading model as binary: {e2}")
```

In lifespan, the fallback binary load runs even when the first booster.load_model("abu_dhabi_model.json") succeeds because the second try: is outside the except block. This causes redundant work and can emit a misleading "Error loading model as binary" even though the model is already loaded. Move the binary-load attempt inside the except, or guard it (e.g., only run if "f1_model" is not set).

```diff
-try:
-    with open("abu_dhabi_model.json", "rb") as f:
-        booster = xgb.Booster()
-        booster.load_model(f)
-    ml_models["f1_model"] = booster
-    print("Model loaded as binary.")
-except Exception as e2:
-    print(f"Error loading model as binary: {e2}")
+if "f1_model" not in ml_models:
+    try:
+        with open("abu_dhabi_model.json", "rb") as f:
+            booster = xgb.Booster()
+            booster.load_model(f)
+        ml_models["f1_model"] = booster
+        print("Model loaded as binary.")
+    except Exception as e2:
+        print(f"Error loading model as binary: {e2}")
```
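The guarded-fallback pattern can be sketched generically. The loader functions below are hypothetical stand-ins for the xgb.Booster calls; the point is that the fallback only runs when the primary load failed to populate the registry.

```python
ml_models = {}

def load_primary():
    # Stand-in for booster.load_model("abu_dhabi_model.json");
    # raises to force the fallback path in this demo.
    raise ValueError("simulated parse failure")

def load_fallback():
    # Stand-in for the binary-mode load via open(..., "rb").
    return "booster-from-binary"

try:
    ml_models["f1_model"] = load_primary()
    print("Model loaded.")
except Exception as e:
    print(f"Error loading model: {e}")

if "f1_model" not in ml_models:  # guard: fallback runs only on failure
    try:
        ml_models["f1_model"] = load_fallback()
        print("Model loaded as binary.")
    except Exception as e2:
        print(f"Error loading model as binary: {e2}")

print(ml_models["f1_model"])  # booster-from-binary
```

Had load_primary() succeeded, the guard would skip the second attempt entirely, avoiding the redundant load and the misleading error message.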
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
main.py (1)

Lines 15-30: ⚠️ Potential issue | 🟠 Major

Loader still targets `abu_dhabi_model.json` and reloads even after success. The trainer now saves `abudhabimodel.json` (12-3.py), so the loader will miss the model. Also, the binary load runs even if the first load succeeds.

✅ Suggested fix (single path + true fallback)

```diff
-if os.path.exists("abu_dhabi_model.json"):
+MODEL_PATH = "abudhabimodel.json"
+if os.path.exists(MODEL_PATH):
     try:
         booster = xgb.Booster()
-        booster.load_model("abu_dhabi_model.json")
+        booster.load_model(MODEL_PATH)
         ml_models["f1_model"] = booster
         print("Model loaded.")
     except Exception as e:
         print(f"Error loading model: {e}")
-        try:
-            with open("abu_dhabi_model.json", "rb") as f:
-                booster = xgb.Booster()
-                booster.load_model(f)
-            ml_models["f1_model"] = booster
-            print("Model loaded as binary.")
-        except Exception as e2:
-            print(f"Error loading model as binary: {e2}")
+        try:
+            with open(MODEL_PATH, "rb") as f:
+                booster = xgb.Booster()
+                booster.load_model(f)
+            ml_models["f1_model"] = booster
+            print("Model loaded as binary.")
+        except Exception as e2:
+            print(f"Error loading model as binary: {e2}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main.py` around lines 15 - 30, The loader currently checks for "abu_dhabi_model.json" and attempts two loads unconditionally; change it to look for the trainer's filename "abudhabimodel.json" and perform a single primary load (xgb.Booster().load_model("abudhabimodel.json")) inside a try/except, and only if that fails perform a fallback binary attempt by opening the same file in "rb" and calling load_model on the file object; on any successful load assign ml_models["f1_model"] once and avoid re-running the second load when the first succeeded (refer to symbols booster, xgb.Booster, load_model, and ml_models["f1_model"] to locate and update the logic).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@10-3.py`:
- Line 207: The console output string printed by the print call that contains
"Predicted Qatar 2025 Race Pace – Top 5" uses an en dash (Unicode) which Ruff
flags; edit the print statement (the print(...) invocation) to replace the en
dash with a standard ASCII hyphen ("-") so the string becomes "Predicted Qatar
2025 Race Pace - Top 5"; keep the rest of the text exactly the same.
- Around line 238-239: The code calls model.get_booster() which doesn't exist on
sklearn.ensemble.GradientBoostingRegressor and will raise AttributeError;
replace the save logic to serialize the sklearn model with joblib (e.g., use
joblib.dump(model, "qatarmodel.pkl") instead of
model.get_booster().save_model(...)) and update the success print to reference
the new file name; locate the save logic around the model variable where
get_booster() is invoked and remove that call, importing joblib if not already
imported.
In `@10.py`:
- Line 207: The console output string uses an EN DASH character in the print
statement "Predicted Qatar 2025 Race Pace – Top 5"; replace the en dash (U+2013)
with a standard ASCII hyphen-minus "-" so the line becomes print("\nPredicted
Qatar 2025 Race Pace - Top 5") to satisfy Ruff's linting rule.
In `@12-3.py`:
- Around line 220-221: The saved model filename used in
model.get_booster().save_model(...) and the subsequent print call is
inconsistent with the rest of the codebase; change the filename string from
"abudhabimodel.json" to "abu_dhabi_model.json" in the save_model call and update
the print message accordingly so main.py, test_main.py, and uncertainty.py can
load the model.
In `@8-3.py`:
- Line 236: The console output string in the print call containing "Predicted
Las Vegas 2024 Race Pace – Top 5" uses a Unicode EN DASH which Ruff flags; edit
the print statement (the print(...) expression) to replace the EN DASH with a
plain ASCII hyphen/minus (e.g., "Predicted Las Vegas 2024 Race Pace - Top 5") so
the string contains only ASCII characters.
- Around line 267-268: The code calls model.get_booster() which only exists on
XGBoost/LightGBM boosters and will raise AttributeError when model is a
scikit-learn GradientBoostingRegressor; replace the booster save with
scikit-learn serialization by using joblib.dump(model, "lasvegasmodel.joblib")
(or pickle) to persist the trained GradientBoostingRegressor instance and update
the success message accordingly; ensure you import joblib and change the
filename/print to reflect the new format.
In `@8.py`:
- Line 236: The console output string in the top-level print call
print("\nPredicted Las Vegas 2024 Race Pace – Top 5") contains a Unicode en
dash; replace it with an ASCII hyphen-minus (e.g. "Predicted Las Vegas 2024 Race
Pace - Top 5") to satisfy Ruff and avoid non-ASCII punctuation in the print
statement.
In `@main.py`:
- Around line 41-48: The tests are importing a removed module name; update the
import in test_main.py from the stale module name main1 to the consolidated
module main so the test imports the actual FastAPI app and data objects;
specifically change the import statement that currently reads "from main1 import
app, ml_models, lookup_data" to import app, ml_models, and lookup_data from main
(ensure the symbols app, ml_models, lookup_data referenced in main.py are
exported/available).
- Around line 50-55: The PredictionInput Pydantic model has invalid annotations:
change driver_code: ver to a proper type (e.g., driver_code: str) and replace
the numeric type annotations qualifying_time: 90.90 and clean_air_race_pace:
89.01 with the float type and move the numeric values to defaults (e.g.,
qualifying_time: float = 90.90, clean_air_race_pace: float = 89.01); keep
rain_prob: float = 0.0 and temperature: float = 25.0 as-is and ensure the class
still inherits from BaseModel (PredictionInput).
---
Outside diff comments:
In `@main.py`:
- Around line 15-30: The loader currently checks for "abu_dhabi_model.json" and
attempts two loads unconditionally; change it to look for the trainer's filename
"abudhabimodel.json" and perform a single primary load
(xgb.Booster().load_model("abudhabimodel.json")) inside a try/except, and only
if that fails perform a fallback binary attempt by opening the same file in "rb"
and calling load_model on the file object; on any successful load assign
ml_models["f1_model"] once and avoid re-running the second load when the first
succeeded (refer to symbols booster, xgb.Booster, load_model, and
ml_models["f1_model"] to locate and update the logic).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
`.gitignore`, `10-3.py`, `10.py`, `12-3.py`, `12.py`, `17.py`, `2024 report .pbix`, `2025 report.pbix`, `8-3.py`, `8.py`, `abu_dhabi_ensemble_model.joblib`, `abu_dhabi_ffnmodel.keras`, `main.py`, `main1.py`
💤 Files with no reviewable changes (2)
- .gitignore
- main1.py
```python
)
top5.index = range(1, 6)

print("\nPredicted Qatar 2025 Race Pace – Top 5")
```

Replace EN DASH in console output.

Ruff flags the Unicode dash; prefer a standard hyphen to avoid encoding surprises.

🩹 Suggested fix

```diff
-print("\nPredicted Qatar 2025 Race Pace – Top 5")
+print("\nPredicted Qatar 2025 Race Pace - Top 5")
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 207-207: String contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@10-3.py` at line 207, The console output string printed by the print call
that contains "Predicted Qatar 2025 Race Pace – Top 5" uses an en dash (Unicode)
which Ruff flags; edit the print statement (the print(...) invocation) to
replace the en dash with a standard ASCII hyphen ("-") so the string becomes
"Predicted Qatar 2025 Race Pace - Top 5"; keep the rest of the text exactly the
same.
```python
model.get_booster().save_model("qatarmodel.json")
print("qatarmodel.json saved successfully")
```

🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's examine the file to understand the model type and context
head -n 250 10-3.py | tail -n 50
```

Repository: MallamTeja/f1_models
Length of output: 1133

🏁 Script executed:

```shell
# Search for model instantiation and imports
rg -n "GradientBoostingRegressor|XGBRegressor|get_booster|import.*xgb|from sklearn" 10-3.py | head -30
```

Repository: MallamTeja/f1_models
Length of output: 365

🏁 Script executed:

```shell
# Get the exact lines 238-239 with surrounding context
sed -n '230,250p' 10-3.py
```

Repository: MallamTeja/f1_models
Length of output: 246

sklearn.ensemble.GradientBoostingRegressor doesn't have get_booster(); this will raise AttributeError at runtime. Use joblib.dump() to serialize sklearn models.

Suggested fix

```diff
-model.get_booster().save_model("qatarmodel.json")
-print("qatarmodel.json saved successfully")
+joblib.dump(model, "qatarmodel.joblib")
+print("qatarmodel.joblib saved successfully")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@10-3.py` around lines 238 - 239, The code calls model.get_booster() which
doesn't exist on sklearn.ensemble.GradientBoostingRegressor and will raise
AttributeError; replace the save logic to serialize the sklearn model with
joblib (e.g., use joblib.dump(model, "qatarmodel.pkl") instead of
model.get_booster().save_model(...)) and update the success print to reference
the new file name; locate the save logic around the model variable where
get_booster() is invoked and remove that call, importing joblib if not already
imported.
```python
)
top5.index = range(1, 6)

print("\nPredicted Qatar 2025 Race Pace – Top 5")
```

Replace EN DASH in console output.

Ruff flags the Unicode dash; prefer a standard hyphen.

🩹 Suggested fix

```diff
-print("\nPredicted Qatar 2025 Race Pace – Top 5")
+print("\nPredicted Qatar 2025 Race Pace - Top 5")
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 207-207: String contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@10.py` at line 207, The console output string uses an EN DASH character in
the print statement "Predicted Qatar 2025 Race Pace – Top 5"; replace the en
dash (U+2013) with a standard ASCII hyphen-minus "-" so the line becomes
print("\nPredicted Qatar 2025 Race Pace - Top 5") to satisfy Ruff's linting
rule.
```python
model.get_booster().save_model("abudhabimodel.json")
print("abudhabimodel.json saved successfully")
```

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Find all references to the old/new Abu Dhabi model filenames.
rg -n "abu_dhabi_model\.json|abudhabimodel\.json" -C2
```

Repository: MallamTeja/f1_models
Length of output: 1355

Align saved model filename with the API loader.

The code saves the model as abudhabimodel.json (line 220), but main.py, test_main.py, and uncertainty.py all load from abu_dhabi_model.json. Rename the saved file to abu_dhabi_model.json to match the expected filename.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@12-3.py` around lines 220 - 221, The saved model filename used in
model.get_booster().save_model(...) and the subsequent print call is
inconsistent with the rest of the codebase; change the filename string from
"abudhabimodel.json" to "abu_dhabi_model.json" in the save_model call and update
the print message accordingly so main.py, test_main.py, and uncertainty.py can
load the model.
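The alternative the earlier comment mentions, making the inference path configurable, can be sketched with an environment variable that defaults to the current filename. `F1_MODEL_PATH` is a hypothetical variable name, not something the project defines:

```python
import os

# Hypothetical env var; falls back to the filename main.py loads today.
MODEL_PATH = os.environ.get("F1_MODEL_PATH", "abu_dhabi_model.json")
print(MODEL_PATH)
```

The training script would read the same variable when saving, so trainer and loader can never drift apart.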
```python
)
top5.index = range(1, 6)

print("\nPredicted Las Vegas 2024 Race Pace – Top 5")
```

Replace EN DASH in console output.

Ruff flags this Unicode dash.

🩹 Suggested fix

```diff
-print("\nPredicted Las Vegas 2024 Race Pace – Top 5")
+print("\nPredicted Las Vegas 2024 Race Pace - Top 5")
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 236-236: String contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@8-3.py` at line 236, The console output string in the print call containing
"Predicted Las Vegas 2024 Race Pace – Top 5" uses a Unicode EN DASH which Ruff
flags; edit the print statement (the print(...) expression) to replace the EN
DASH with a plain ASCII hyphen/minus (e.g., "Predicted Las Vegas 2024 Race Pace
- Top 5") so the string contains only ASCII characters.
```python
model.get_booster().save_model("lasvegasmodel.json")
print("lasvegasmodel.json saved successfully")
```

🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's look at the file structure and find file 8-3.py
find . -name "8-3.py" -type f
```

Repository: MallamTeja/f1_models
Length of output: 72

🏁 Script executed:

```shell
# Read the file to see model instantiation and the problematic lines
cat -n ./8-3.py | head -280
```

Repository: MallamTeja/f1_models
Length of output: 7743

GradientBoostingRegressor doesn't support get_booster(); this will raise AttributeError at runtime. Use joblib.dump() for scikit-learn model serialization:

Suggested fix

```diff
-model.get_booster().save_model("lasvegasmodel.json")
-print("lasvegasmodel.json saved successfully")
+joblib.dump(model, "lasvegasmodel.joblib")
+print("lasvegasmodel.joblib saved successfully")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@8-3.py` around lines 267 - 268, The code calls model.get_booster() which only
exists on XGBoost/LightGBM boosters and will raise AttributeError when model is
a scikit-learn GradientBoostingRegressor; replace the booster save with
scikit-learn serialization by using joblib.dump(model, "lasvegasmodel.joblib")
(or pickle) to persist the trained GradientBoostingRegressor instance and update
the success message accordingly; ensure you import joblib and change the
filename/print to reflect the new format.
```python
)
top5.index = range(1, 6)

print("\nPredicted Las Vegas 2024 Race Pace – Top 5")
```

Replace EN DASH in console output.

Ruff flags the Unicode dash.

🩹 Suggested fix

```diff
-print("\nPredicted Las Vegas 2024 Race Pace – Top 5")
+print("\nPredicted Las Vegas 2024 Race Pace - Top 5")
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 236-236: String contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@8.py` at line 236, The console output string in the top-level print call
print("\nPredicted Las Vegas 2024 Race Pace – Top 5") contains a Unicode en
dash; replace it with an ASCII hyphen-minus (e.g. "Predicted Las Vegas 2024 Race
Pace - Top 5") to satisfy Ruff and avoid non-ASCII punctuation in the print
statement.
```python
app = FastAPI(
    title="F1 Race Pace Predictor",
    description="API for predicting F1 race pace based on qualifying and weather data",
    version="1.0.0",
    lifespan=lifespan,
    docs_url="/",
    redoc_url=None
)
```

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Locate stale references to main1.
rg -n "from\s+main1|import\s+main1" -C2
```

Repository: MallamTeja/f1_models
Length of output: 274

🏁 Script executed:

```shell
#!/bin/bash
# Search for any other references to main1 (including in comments, file names, etc.)
rg "main1" -n
```

Repository: MallamTeja/f1_models
Length of output: 124

Update stale import in test_main.py.

Line 3: `from main1 import app, ml_models, lookup_data` should be `from main import app, ml_models, lookup_data`. The module main1 no longer exists after consolidation, causing CI test failures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main.py` around lines 41 - 48, The tests are importing a removed module name;
update the import in test_main.py from the stale module name main1 to the
consolidated module main so the test imports the actual FastAPI app and data
objects; specifically change the import statement that currently reads "from
main1 import app, ml_models, lookup_data" to import app, ml_models, and
lookup_data from main (ensure the symbols app, ml_models, lookup_data referenced
in main.py are exported/available).
```diff
 class PredictionInput(BaseModel):
-    driver_code: str
-    qualifying_time: float
-    clean_air_race_pace: float
+    driver_code: ver
+    qualifying_time: 90.90
+    clean_air_race_pace: 89.01
     rain_prob: float = 0.0
     temperature: float = 25.0
```

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check if main.py exists and examine the PredictionInput class around lines 50-55
if [ -f main.py ]; then
  echo "=== File exists, showing context around lines 50-55 ==="
  sed -n '45,60p' main.py | cat -n
  echo ""
  echo "=== Full class definition using rg ==="
  rg -n "class PredictionInput" -A 10 main.py
else
  echo "main.py not found in root directory"
  # Try to find it
  fd main.py
fi
```

Repository: MallamTeja/f1_models
Length of output: 983

Fix invalid Pydantic field annotations in PredictionInput class.

`driver_code: ver` is undefined (not a valid type), and float literals (90.90, 89.01) are used as type annotations instead of the float type. This will cause Pydantic validation errors at runtime.

Suggested fix

```diff
 class PredictionInput(BaseModel):
-    driver_code: ver
-    qualifying_time: 90.90
-    clean_air_race_pace: 89.01
+    driver_code: str = "VER"
+    qualifying_time: float = 90.90
+    clean_air_race_pace: float = 89.01
     rain_prob: float = 0.0
     temperature: float = 25.0
```

🧰 Tools
🪛 Ruff (0.15.2)
[error] 51-51: Undefined name ver
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main.py` around lines 50 - 55, The PredictionInput Pydantic model has invalid
annotations: change driver_code: ver to a proper type (e.g., driver_code: str)
and replace the numeric type annotations qualifying_time: 90.90 and
clean_air_race_pace: 89.01 with the float type and move the numeric values to
defaults (e.g., qualifying_time: float = 90.90, clean_air_race_pace: float =
89.01); keep rain_prob: float = 0.0 and temperature: float = 25.0 as-is and
ensure the class still inherits from BaseModel (PredictionInput).
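A stdlib-only sketch of why those annotations fail (no Pydantic required; the class names here are illustrative, not the project's). Without `from __future__ import annotations`, Python evaluates class-body annotations eagerly, so an undefined name like `ver` raises NameError at import time; a float literal is legal syntax but is stored verbatim instead of acting as a type.

```python
# Undefined name as annotation: fails at class-creation time.
try:
    class BadInput:
        driver_code: ver  # evaluated eagerly -> NameError
except NameError as e:
    print(f"NameError: {e}")

# A float literal is accepted syntactically but is not a type:
class OddInput:
    qualifying_time: 90.90  # stored verbatim in __annotations__

print(OddInput.__annotations__["qualifying_time"])  # 90.9

# The corrected shape: real types, with the literals demoted to defaults.
class PredictionInput:
    driver_code: str = "VER"
    qualifying_time: float = 90.90
    clean_air_race_pace: float = 89.01
    rain_prob: float = 0.0
    temperature: float = 25.0

print(PredictionInput.__annotations__["qualifying_time"])  # <class 'float'>
```

Pydantic builds its validators from `__annotations__`, which is why only the corrected form works as a request schema.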
Summary by CodeRabbit
New Features
Bug Fixes
Chores