Commit 8c3ed01

JosephRRB, josh146, and ikurecic authored
GQE demo using Pennylane (#1119)
### Before submitting

Please complete the following checklist when submitting a PR:

- [ ] Ensure that your tutorial executes correctly and conforms to the guidelines specified in the [README](../README.md).
- [ ] Remember to do a grammar check of the content you include.
- [ ] All tutorials conform to [PEP8 standards](https://www.python.org/dev/peps/pep-0008/). To auto-format files, simply `pip install black` and then run `black -l 100 path/to/file.py`.

When all the above are checked, delete everything above the dashed line and fill in the pull request template.

------------------------------------------------------------------------------------------------------------

**Title:** Generative quantum eigensolver demo using PennyLane

**Summary:** We use PennyLane to generate a static molecular dataset and calculate the corresponding energies, which we then use to train a small GPT model as described in https://arxiv.org/abs/2401.09253. We show that as training progresses, the GPT model generates operator sequences whose predicted energies more closely resemble the true energies calculated by PennyLane. In addition, better-performing models are shown to sample the ground state more reliably.

- Story ticket: https://app.shortcut.com/xanaduai/story/64095/contribute-gqe-demo-as-a-pennylane-demo

**Relevant references:**

**Possible Drawbacks:**

**Related GitHub Issues:**

----

If you are writing a demonstration, please answer these questions to facilitate the marketing process.

* GOALS — Why are we working on this now? *E.g., promote a new PL feature or show a PL implementation of a recent paper.*
* AUDIENCE — Who is this for? *E.g., chemistry researchers, PL educators, beginners in quantum computing.*
* KEYWORDS — What words should be included in the marketing post?
* Which of the following types of documentation is most similar to your file? (more details [here](https://www.notion.so/xanaduai/Different-kinds-of-documentation-69200645fe59442991c71f9e7d8a77f8))
  - [ ] Tutorial
  - [ ] Demo
  - [ ] How-to

---------

Co-authored-by: JosephRRB <joseph.bunao@xanadu.ai>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Ivana Kurečić <ivana@xanadu.ai>
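The summary above describes the core GQE idea: a fixed vocabulary of quantum operators, token sequences sampled from a generative model, and energies of the resulting states used as the training signal. As a hedged illustration only (this is not the demo's actual code, and it uses a toy single-qubit system with NumPy rather than a molecular Hamiltonian in PennyLane), the energy evaluation for a sampled token sequence could be sketched as:

```python
import numpy as np

# Toy single-qubit illustration of the GQE setup: each "token" indexes a
# fixed operator from a pool, and a token sequence defines a state whose
# energy under a Hamiltonian is the quantity the GPT model learns to predict.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(generator, angle):
    # exp(-i * angle * generator / 2) for a Pauli generator
    return np.cos(angle / 2) * I - 1j * np.sin(angle / 2) * generator

# Hypothetical operator pool: fixed-angle rotations (tokens 0-3)
pool = [rotation(X, 0.2), rotation(X, -0.2), rotation(Z, 0.2), rotation(Z, -0.2)]

hamiltonian = Z  # toy Hamiltonian; its ground-state energy is -1

def energy(token_sequence):
    state = np.array([1, 0], dtype=complex)  # start in |0>
    for token in token_sequence:
        state = pool[token] @ state
    return float(np.real(state.conj() @ hamiltonian @ state))

print(round(energy([0, 0, 0]), 4))  # three X-rotations by 0.2: <Z> = cos(0.6) ≈ 0.8253
```

In the actual demo, the pool would contain molecular excitation operators, the Hamiltonian would come from PennyLane's quantum chemistry module, and a GPT model would be trained so that sequences it assigns high probability correspond to low-energy states.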
1 parent 9b43399 commit 8c3ed01

File tree

11 files changed

+781
-0
lines changed


_static/authors/joseph_bunao.jpg

400 KB
_static/authors/joseph_bunao.txt

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
.. bio:: Joseph Bunao
    :photo: ../_static/authors/joseph_bunao.jpg

    Joseph is a machine learning specialist at Xanadu. His main focus is to accelerate quantum hardware research and processes using machine learning.
Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
{
    "title": "Generative quantum eigensolver training using PennyLane data",
    "authors": [
        {
            "username": "Joseph"
        },
        {
            "username": "zy_n"
        }
    ],
    "dateOfPublication": "2024-09-20T00:00:00+00:00",
    "dateOfLastModification": "2024-09-20T00:00:00+00:00",
    "categories": ["Quantum Machine Learning", "Quantum Chemistry", "Algorithms"],
    "tags": [],
    "previewImages": [
        {
            "type": "thumbnail",
            "uri": "_static/demo_thumbnails/regular_demo_thumbnails/thumbnail_generative_quantum_eigensolver.png"
        },
        {
            "type": "large_thumbnail",
            "uri": "/_static/demo_thumbnails/large_demo_thumbnails/thumbnail_large_generative_quantum_eigensolver.png"
        }
    ],
    "seoDescription": "Learn how you can train a small GPT model using the generative quantum eigensolver (GQE) technique and PennyLane data.",
    "doi": "",
    "canonicalURL": "/qml/demos/gqe_training",
    "references": [
        {
            "id": "nakaji2024",
            "type": "article",
            "title": "The generative quantum eigensolver (GQE) and its application for ground state search",
            "authors": "K. Nakaji, L. B. Kristensen, J. A. Campos-Gonzalez-Angulo, M. G. Vakili, H. Huang, M. Bagherimehrab, C. Gorgulla, F. Wong, A. McCaskey, J. S. Kim, T. Nguyen, P. Rao, A. Aspuru-Guzik",
            "year": "2024",
            "journal": "",
            "url": "https://arxiv.org/abs/2401.09253"
        },
        {
            "id": "radford2019",
            "type": "article",
            "title": "Language Models are Unsupervised Multitask Learners",
            "authors": "A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever",
            "year": "2019",
            "journal": "OpenAI blog",
            "url": "https://openai.com/index/better-language-models/"
        },
        {
            "id": "vaswani2017",
            "type": "article",
            "title": "Attention is All you Need",
            "authors": "A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin",
            "year": "2017",
            "journal": "Advances in Neural Information Processing Systems",
            "url": "https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html"
        }
    ],
    "basedOnPapers": [
        "10.48550/arXiv.2401.09253"
    ],
    "referencedByPapers": [],
    "relatedContent": [
        {
            "type": "demonstration",
            "id": "tutorial_vqe",
            "weight": 1.0
        },
        {
            "type": "demonstration",
            "id": "tutorial_vqe_qng",
            "weight": 1.0
        }
    ],
    "hardware": []
}
