YGGo! - Distributed & Open Source Web Search Engine

This project is dedicated to the defenders of the city of Bakhmut

Written out of inspiration to explore the Yggdrasil ecosystem, after the last YaCy node there was discontinued. The engine can also be useful for crawling regular websites, small business resources, and local networks.

The project goals are a simple interface, clear architecture, and lightweight server requirements.

Overview

Home page

https://github.com/YGGverse/YGGo/tree/main/media

Online instances

Requirements

php 8+
php-dom
php-pdo
php-curl
php-gd
php-mbstring
php-zip
php-mysql
sphinxsearch

Installation

  • The web root dir is /public
  • Deploy the database using the MySQL Workbench project provided in the /database folder
  • Install Sphinx Search Server
  • Configuration examples are located in the /config folder
  • Make sure the /storage and /public/snap folders are writable
  • Set up the /crontab scripts following the example provided
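As an illustration only (script names, paths, and intervals are assumptions, not taken from this repository), a crontab setup for the crawler and cleaner tasks could look like:

```
# /etc/crontab sketch -- adjust paths, user, and schedule to your deployment
*/5  * * * * www-data php /var/www/yggo/crontab/crawler.php >> /var/log/yggo/crawler.log 2>&1
0    2 * * * www-data php /var/www/yggo/crontab/cleaner.php >> /var/log/yggo/cleaner.log 2>&1
```

Check the scripts actually shipped in the /crontab folder for the real entry points.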

JSON API

Enables building third-party applications and index distribution.

Can be enabled or disabled with the API_ENABLED option

Address
/api.php

Returns search results.

Can be enabled or disabled with the API_SEARCH_ENABLED option

Request attributes
GET action=search  - required
GET query={string} - optional, search request; empty if not provided
GET type={string}  - optional, filter by MIME type; empty to include all
GET page={int}     - optional, search results page; 1 if not provided
GET mode=SphinxQL  - optional, enables extended SphinxQL syntax
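As a rough sketch of a third-party client (the node address below is hypothetical; the parameter names follow the list above), a search request URL could be composed like this:

```python
from urllib.parse import urlencode

# Hypothetical node address -- replace with a real YGGo instance.
NODE = "http://127.0.0.1/api.php"

def build_search_url(query="", page=1, mode=None):
    """Compose a search request URL for the YGGo JSON API."""
    params = {"action": "search", "query": query, "page": page}
    if mode:
        params["mode"] = mode  # e.g. "SphinxQL" for extended syntax
    return NODE + "?" + urlencode(params)

print(build_search_url("hello world", page=2))
# -> http://127.0.0.1/api.php?action=search&query=hello+world&page=2
```

The same pattern applies to the hosts and manifest endpoints by swapping the action parameter.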
Hosts distribution

Returns node hosts collected with fields provided in API_HOSTS_FIELDS option.

Can be enabled or disabled with the API_HOSTS_ENABLED option

Request attributes
GET action=hosts - required
Application manifest

Returns node information.

Can be enabled or disabled with the API_MANIFEST_ENABLED option

Request attributes
GET action=manifest - required

Search textual filtering

Default constructions
operator OR:

hello | world

operator MAYBE:

hello MAYBE world

operator NOT:

hello -world

strict order operator (aka operator "before"):

aaa << bbb << ccc

exact form modifier:

raining =cats and =dogs

field-start and field-end modifier:

^hello world$

keyword IDF boost modifier:

boosted^1.234 boostedfieldend$^1.234
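These constructions can be combined in a single query; for example (illustrative queries, following the Sphinx syntax shown above):

```
hello | world -spam
^yggdrasil =mesh network$
```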

Extended syntax

https://sphinxsearch.com/docs/current.html#extended-syntax

Can be enabled with the following attribute

GET m=SphinxQL

Roadmap

Basic features
  • Web pages full text ranking search
  • Unlimited content type groups
  • Flexible settings compatible with IPv4/IPv6 networks
  • Index explorer
  • Safe images preview
  • Extended search syntax support
  • Compressed page snaps history
    • Local
    • Remote
UI
  • CSS only, JS-less interface
  • Unique ident icons for sites without favicons
  • Content genre tabs (#1)
  • Page index explorer
    • Meta
    • Snaps
    • Referrers
  • Results with found matches highlight
  • Time machine feature based on content snaps history
API
  • Index API
    • Manifest
    • Search
    • Hosts
    • MIME list
  • Context advertising API
Crawler
  • Auto crawl links by regular expression rules
    • Pages
    • Manifests
  • Robots.txt / robots meta tags support (#2)
  • Specific rules configuration for every host
  • Auto-stop crawling when the disk quota is reached
  • Transactions support to prevent data loss on queue failures
  • Distributed index crawling between YGGo nodes through the manifest API
  • MIME Content-Type settings
  • Ban links that fail conditions to prevent extra requests
  • Debug log
  • Page content snaps generation
    • Local
    • Remote
  • Index new sites' homepages with higher priority
  • Redirect codes extended processing
  • Palette image index / filter
  • Crawl queue balancer that depends on available CPU
Cleaner
  • Deprecated DB items auto deletion / host settings update
    • Pages
    • Manifests
    • Logs
      • Crawler
      • Cleaner
  • Removal of deprecated history snaps
  • Banned resources reset by timeout
  • Debug log
Other
  • Administrative panel for index moderation
  • Deployment tools

Contributions

Please create a new branch from the master|sqliteway tree for each patch in your fork before opening a PR

git checkout master
git checkout -b my-pr-branch-name

See also: SQLite tree

Donate to contributors

License

Feedback

Please feel free to share your ideas and bug reports here, or use the sources for your own implementations.

Have a good time.