
Data fetching, parsing, and caching need a rework #85

@Avi-E-Koenig

Description

Currently, the flow is:

Main data generation and storing:
1. Fetch a README file.
2. Parse it for repos (owner/repoName) and companies (name).
3. Make a GQL request for every company and repo found above.
4. Store the results in Redis as a single JSON blob (see the sketch below).
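
For reference, a minimal sketch of that pipeline in TypeScript, assuming Node 18+ (global `fetch`) and `ioredis`; the names `fetchReadme`, `parseRepos`, `queryRepo`, and the `main-data` key are illustrative, not the actual implementation:

```ts
// Minimal sketch of the current "generate and store" pipeline.
// Assumes Node 18+ (global fetch) and ioredis; all names are illustrative.
import Redis from "ioredis";

const redis = new Redis();
const CACHE_KEY = "main-data";

async function fetchReadme(url: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`README fetch failed: ${res.status}`);
  return res.text();
}

// Step 2: pull "owner/repoName" tokens out of the README text.
function parseRepos(readme: string): string[] {
  const matches = readme.match(/[\w.-]+\/[\w.-]+/g) ?? [];
  return Array.from(new Set(matches));
}

// Step 3: one GQL request per repo -- this is what burns the rate limit.
async function queryRepo(owner: string, name: string, token: string) {
  const res = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
      "User-Agent": "cache-rework-sketch",
    },
    body: JSON.stringify({
      query: `query ($owner: String!, $name: String!) {
        repository(owner: $owner, name: $name) { stargazerCount pushedAt }
      }`,
      variables: { owner, name },
    }),
  });
  return res.json();
}

async function generateAndStore(readmeUrl: string, token: string) {
  const readme = await fetchReadme(readmeUrl);
  const repos = parseRepos(readme);
  const results = await Promise.all(
    repos.map((slug) => {
      const [owner, name] = slug.split("/");
      return queryRepo(owner, name, token);
    })
  );
  // Step 4: store everything in Redis as a single JSON blob.
  await redis.set(CACHE_KEY, JSON.stringify(results));
}
```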

On every data request:

1. Redis is queried for the data.
2. If Redis holds the data, it is returned in the response.
3. Otherwise, the whole "main data generation and storing" chain above is triggered, then the freshly generated data is returned (see the sketch below).
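
The request path is essentially a read-through cache; a sketch reusing the names from the block above:

```ts
// Read-through cache sketch for the request path.
async function getData(readmeUrl: string, token: string): Promise<unknown> {
  const cached = await redis.get(CACHE_KEY);
  if (cached !== null) return JSON.parse(cached); // cache hit

  // Cache miss: trigger the full regeneration chain, then read it back.
  await generateAndStore(readmeUrl, token);
  const fresh = await redis.get(CACHE_KEY);
  return fresh !== null ? JSON.parse(fresh) : null;
}
```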

Main issue:
There is no fine-grained fetching or diff management: every regeneration fires dozens of GQL requests instead of tracking changes and only requesting the affected/new companies/repos. With a standard API key, the rate limit can lead to inconsistent data being displayed to the client. A diff-based sketch follows.
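
One possible direction (hedged, since the storage layout is up for discussion): persist the previously parsed entry list alongside the cached data and diff against it on each regeneration, so only new or changed entries cost GQL requests. The `known-repos` key and the merge step below are hypothetical.

```ts
// Hypothetical diff step: keep the previously parsed entry list under a
// separate key and only query entries that are new since the last run.
async function regenerateDiff(readmeUrl: string, token: string) {
  const readme = await fetchReadme(readmeUrl);
  const current = parseRepos(readme);
  const previous: string[] = JSON.parse(
    (await redis.get("known-repos")) ?? "[]"
  );
  const known = new Set(previous);
  const added = current.filter((slug) => !known.has(slug));

  // Only the new entries cost GQL requests now.
  const fetched: Record<string, unknown> = {};
  for (const slug of added) {
    const [owner, name] = slug.split("/");
    fetched[slug] = await queryRepo(owner, name, token);
  }
  // Merge `fetched` into the cached JSON here instead of rebuilding it all.
  await redis.set("known-repos", JSON.stringify(current));
}
```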

**Another major issue:**
The source README needs to be optimized so that parsing does not produce malformed entries that trigger bad GQL calls; a small validation sketch follows.
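
One cheap guard, regardless of README cleanup: validate parsed entries before spending a GQL call on them. The pattern below is an assumption about what a valid `owner/repoName` slug looks like.

```ts
// Guard against malformed entries before spending a GQL call on them.
// The pattern is an assumption about valid owner/repoName slugs.
const VALID_SLUG = /^[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?\/[\w.-]+$/;

function validRepos(parsed: string[]): string[] {
  return parsed.filter((slug) => VALID_SLUG.test(slug));
}
```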
