* Start of adding CMS page
* CMS delete works
* CMS first attempt feature complete
* First attempt at API Reference
* tidy up API ref
* cache parameter
* Data based examples on input forms, Inline deletes, Clickable Facets, Some tidy up
* Cleaner local configuration
* Readme tidy up
* Update README.md
* Use Passport to allow basic HTTP auth in lockdown mode
* Update README.md
# Overview: Simple Search Service
Simple Search Service is an IBM Bluemix app that lets you quickly create a faceted search engine, exposing an API you can use to bring search into your own apps. The service also creates a website that lets you preview the API, test it against your own data, and manage your data via a simple CMS.
Once deployed, use the browser to upload CSV or TSV data. Specify the fields to facet, and the service handles the rest.

The application uses these Bluemix services:

* a Cloudant database
* a Redis in-memory database from Compose.io (Optional)
Once the data is uploaded, you can use the UI to browse and manage your data via the integrated CMS. Additionally, a CORS-enabled, cached API endpoint is available at `<your domain name>/search`. The endpoint takes advantage of Cloudant's built-in integration for Lucene full-text indexing. Here's what you get:

You can use this along with the rest of the API to integrate the Simple Search Service into your apps. For a full API reference, [click here](API%20Reference.md).
While this app is a demo to showcase how easily you can build an app on Bluemix using Node.js and Cloudant, it also provides a mature search API that scales with the addition of multiple Simple Search Service nodes and a centralized cache using Redis by Compose.io. In fact, a similar architecture powers the search experience in the Bluemix services catalog.
A more detailed walkthrough of using Simple Search Service is available [here](https://developer.ibm.com/clouddataservices/2016/01/21/introducing-simple-faceted-search-service/).

The fastest way to deploy this application to Bluemix is to click the **Deploy to Bluemix** button.

Clone this repository then run `npm install` to add the Node.js libraries required to run the app.
Then create some environment variables that contain your Cloudant URL, and optionally, your Redis details:
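For example, from a local shell (a sketch: `SSS_REDIS_PASSWORD` is named later in this section, but the other variable names are assumptions, so check `app.js` for the exact names your version expects):

```shell
# Cloudant connection (replace USERNAME, PASSWORD and HOSTNAME with your own)
export SSS_CLOUDANT_URL='https://USERNAME:PASSWORD@HOSTNAME'

# Optional Redis details (assumed names; omit SSS_REDIS_PASSWORD if your
# Redis server does not require a password)
export SSS_REDIS_HOST='127.0.0.1'
export SSS_REDIS_PORT='6379'
export SSS_REDIS_PASSWORD='secret'
```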
replacing the `USERNAME`, `PASSWORD` and `HOSTNAME` placeholders for your own Cloudant account's details. If your Redis server does not require a password, do not set the `SSS_REDIS_PASSWORD` environment variable.
Then run:

```sh
node app.js
```

## Lockdown mode
If you have uploaded your content into the Simple Search Service but now want only the `/search` endpoint to be available publicly, you can enable "Lockdown mode".
Simply set an environment variable called `LOCKDOWN` to `true` before running the Simple Search Service:

```sh
export LOCKDOWN=true
node app.js
```

or set a custom environment variable in Bluemix.
When lockdown mode is detected, all web requests receive a `401 Unauthorized` response, except for the `/search` endpoint, which continues to work. This prevents your data from being modified until lockdown mode is switched off again by removing the environment variable.
If you wish to access the Simple Search Service whilst in lockdown mode, you can enable basic HTTP authentication by setting two more environment variables:
* `SSS_LOCKDOWN_USERNAME`
* `SSS_LOCKDOWN_PASSWORD`
When these are set, you are able to bypass lockdown mode by providing a matching username and password. If you access the UI, your browser will prompt you for these details. If you want to access the API you can provide the username and password as part of your request:
```sh
curl -X GET 'http://<yourdomain>/row/4dac2df712704b397f1b64a1c8e25033' --user <username>:<password>
```
## API Reference
The Simple Search Service has an API that allows you to manage your data outside of the provided UI. Use this to integrate the Simple Search Service with your applications.
### Search
Search is provided by the `GET /search` endpoint.
#### Fielded Search
Search on any of the indexed fields in your dataset using fielded search.
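For example, assuming a dataset with an indexed `type` field (the field used in the example response later in this document):

```bash
# Return any docs whose 'type' field matches 'black'
GET /search?q=type:black
```

Fielded queries use Cloudant's Lucene syntax, so multiple `field:value` terms can be combined with `AND` and `OR`.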

#### Free-text Search

Search across all fields in your dataset using free-text search.

```bash
# Return any docs where 'black' is mentioned
GET /search?q=black
```

#### Pagination
Get the next page of results using the `bookmark` parameter. This is provided in all results from the `/search` endpoint (see example responses below). Pass this in to the next search (with the same query parameters) to return the next set of results.

```bash
# Return the next set of docs where 'black' is mentioned
GET /search?q=black&bookmark=<...>
```


It is possible to alter the number of results returned using the `limit` parameter.

```bash
# Return the next set of docs where 'black' is mentioned, 10 at a time
GET /search?q=black&bookmark=<...>&limit=10
```


You can choose whether or not to use the cache via the `cache` parameter (defaults to `true`).

```bash
# Return the next set of docs where 'black' is mentioned, don't use the cache
GET /search?q=black&bookmark=<...>&cache=false
```

#### Example Response
All searches will respond in the same way.
```
{
  "total_rows": 19, // The total number of rows in the dataset
  "bookmark": "g1AAAA...JjFkA0kLVvg", // bookmark, for pagination
  "rows": [ // the rows returned in this response
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... },
    { ... }
  ],
  "counts": { // counts of the fields which were selected as facets during import
    "type": {
      "Black": 19
    }
  },
  "from_cache": true, // did this response come from the cache?
  "_ts": 1467108849821
}
```
### Get a specific row
A specific row can be returned using its unique ID, found in the `_id` field of each row. This is done by using the `GET /row/:id` endpoint.

```bash
GET /row/44d2a49201625252a51d252824932580
```

This will return the JSON representation of this specific row.
187
+
188
+
### Add a new row
New data can be added a row at a time using the `POST /row` endpoint.
Call this endpoint passing in key/value pairs that match the fields in the existing data. There are __NO__ required fields, and all field types will be enforced. The request will fail if any fields are passed in that do not already exist in the dataset.
```bash
POST /row -d'field_1=value_1&field_n=value_n'
```

The `_id` of the new row is auto-generated and returned in the `id` field of the response.
```json
{
  "ok": true,
  "id": "22a747412adab2882be7e38a1393f4f2",
  "rev": "1-8a23bfa9ee2c88f2ae8dd071d2cafd56"
}
```
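Using curl, such a request might look like this (a sketch; the domain is a placeholder and the field names are illustrative):

```sh
curl -X POST 'http://<yourdomain>/row' -d 'field_1=value_1&field_n=value_n'
```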
### Update an existing row
Existing data can be updated using the `PUT /row/:id` endpoint.
Call this endpoint passing in key/value pairs that match the fields in the existing data - you must also include the `_id` parameter in the key/value pairs. There are _NO_ required fields, and all field types will be enforced. The request will fail if any fields are passed in that do not already exist in the dataset.
> *Note:* Any fields which are not provided at the time of an update will be removed. Even if a field is not changing, it must always be provided to preserve its value.
The response is similar to that of adding a row, although note that the revision number of the document has increased.
```json
{
  "ok": true,
  "id": "22a747412adab2882be7e38a1393f4f2",
  "rev": "2-6281e0a21ed461659dba6a96d3931ccf"
}
```
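Using curl, an update might look like this (a sketch; the domain is a placeholder and the field names are illustrative, but note the `_id` repeated in the request body):

```sh
curl -X PUT 'http://<yourdomain>/row/22a747412adab2882be7e38a1393f4f2' \
  -d '_id=22a747412adab2882be7e38a1393f4f2&field_1=value_1&field_n=value_n'
```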
### Deleting a row
A specific row can be deleted using its unique ID, found in the `_id` field of each row. This is done by using the `DELETE /row/:id` endpoint.

```bash
DELETE /row/44d2a49201625252a51d252824932580
```
The response is similar to that of editing a row; again, note that the revision number of the document has increased once more.
```json
{
  "ok": true,
  "id": "22a747412adab2882be7e38a1393f4f2",
  "rev": "3-37b4f5c715916bf8f90ed997d57dc437"
}
```
## Privacy Notice
The Simple Search Service web application includes code to track deployments to Bluemix and other Cloud Foundry platforms. The following information is sent to a [Deployment Tracker](https://github.com/IBM-Bluemix/cf-deployment-tracker-service) service on each deployment:

This data is collected from the `VCAP_APPLICATION` environment variable in IBM Bluemix and other Cloud Foundry platforms.

For manual deploys, deployment tracking can be disabled by removing `require("cf-deployment-tracker-client").track();` from the end of the `app.js` main server file.