disable requests from crawlers #39

@iay

At least one current deployment is seeing requests from crawlers such as Google's. This is presumably because they have found the site through links from other sites.

It would make sense to at least have the ability to block these through a robots.txt file. That isn't completely trivial, however, because crawlers fetch robots.txt from the site root, and the application's context root isn't necessarily the same place. It might be worth either making the two independent, or looking into how to get a second application context (or static content) into the embedded Jetty instance that Spring Boot provides.
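
As a starting point, here is a minimal sketch of serving a "disallow everything" robots.txt from within the application itself, assuming Spring Web is on the classpath; the `RobotsController` class name is hypothetical, and this approach only helps where the application's context path is also the site root, since crawlers always request `/robots.txt` relative to the host:

```java
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller serving a "disallow everything" robots.txt.
// This only takes effect if the application is deployed at the site root;
// with a context path like /app, crawlers still fetch /robots.txt at the
// host root and never reach this mapping.
@RestController
public class RobotsController {

    @GetMapping(value = "/robots.txt", produces = MediaType.TEXT_PLAIN_VALUE)
    public String robots() {
        // Standard robots exclusion directives: ask all crawlers to index nothing.
        return "User-agent: *\nDisallow: /\n";
    }
}
```

An alternative under the same caveat would be a static robots.txt in `src/main/resources/static/`, which Spring Boot serves as static content by default; serving it outside the application's own context would still need something along the lines of the second context or independent root suggested above.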
