Python Adv Web Apps: updated python-beginners docs and examples (Python, MIT license).
This repo is used in conjunction with the book Automate the Boring Stuff with Python, by Al Sweigart (2015). There is a link to download his code under "Additional Content" on the book's website.
I adopted Sweigart’s text in 2017 after examining several others. I’m really pleased with the way he introduces the basics of Python 3. I decided to abandon Python 2 in 2017, and it’s great to have found a beginner text that explains enough but not too much. I love Sweigart’s style and his examples.
The one thing I dislike in Sweigart’s book is his assumption that we would be using IDLE. We write our code in Atom and run it in Terminal (or PowerShell on Windows).
Below you'll see an overview of the contents. Within each folder, you'll find a README and example Python files. In the course, we cover web scraping with Python and also web apps using the Flask framework. Inside the web_scraping folder and the flask folder here, you'll find a lot more information and examples.
In the course, we spend about four weeks on scraping and another four weeks on Flask.
Here is the week-by-week schedule for the course; Python starts in week 5.
Students read chapters 1 and 2 in Sweigart. Some scripts in the week01 folder are based on Sweigart's; naturally, he has more examples than these. See the README in the week01 folder for more information.
Example files in this folder cover if-statements, for-loops, while-loops, random.randint, and some very basic essentials.
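A minimal sketch of those constructs together (the guessing game and countdown here are invented for illustration, not taken from the course files):

```python
import random

# Pick a secret number, then use a for-loop and an if-statement to find it.
secret = random.randint(1, 10)

tries = 0
for guess in range(1, 11):       # try 1 through 10 in order
    tries += 1
    if guess == secret:
        print("Found", secret, "in", tries, "tries")
        break

# A while-loop version: a simple countdown.
n = 3
while n > 0:
    print(n)
    n -= 1
```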
Students read chapter 3, “Functions,” in Sweigart. See the README in the week02 folder for more information.
Example files in this folder cover functions, arguments, the return statement, scope of variables, and exception handling.
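One small sketch touching each of those topics (the function names are invented for illustration):

```python
def safe_divide(a, b):
    """Return a / b, or None if b is zero (exception handling)."""
    try:
        return a / b
    except ZeroDivisionError:
        return None

total = 0  # a variable in the global scope

def add_to_total(n):
    global total  # required to assign to a global name inside a function
    total += n

add_to_total(5)
print(safe_divide(10, 2))   # 5.0
print(safe_divide(10, 0))   # None
print(total)                # 5
```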
Writing modular code is not only a good practice; it also helps you to write functions you can test reliably and reuse in future work. See modular-code in the week02 folder for more information.
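As a tiny illustration of that idea (the function and filename are hypothetical, not from the course files), a small, self-contained function can be imported elsewhere and tested on its own:

```python
# clean_title.py (hypothetical module): one small, reusable function.
def clean_title(raw):
    """Strip stray whitespace and title-case a scraped heading."""
    return raw.strip().title()

if __name__ == "__main__":
    # Runs only when this file is executed directly, not when imported,
    # so other scripts can reuse clean_title() without side effects.
    print(clean_title("  the great gatsby  "))
```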
Students read chapters 4 and 8 in Sweigart. See the README in the week03 folder for more information.
Example files in this folder cover loops and lists, and how to open, read, and close files.
Chapter 4 covers just about everything one needs to know about Python lists. The README highlights the methods and techniques we will use most often, including some things Sweigart does not cover. Chapter 8 covers reading and writing files with Python. The information we need most often is on pages 180-183; a couple of things the chapter does not cover are explained in the README.
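A short sketch combining the two chapters (the filename and list contents are invented for illustration): write a list to a file, then read it back into a new list.

```python
# List methods from chapter 4.
flavors = ["vanilla", "chocolate", "strawberry"]
flavors.append("pistachio")
flavors.sort()

# File writing and reading from chapter 8; "with" closes the file for us.
with open("flavors.txt", "w", encoding="utf-8") as f:
    for flavor in flavors:
        f.write(flavor + "\n")

with open("flavors.txt", encoding="utf-8") as f:
    lines = [line.rstrip() for line in f]

print(lines)
```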
Students read chapter 5 in Sweigart and learn about Python dictionaries. See the README in the week04 folder for more information.
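A classic dictionary exercise of the kind covered in that chapter (the sample text is invented): counting words with the `get()` method.

```python
# Count how often each word appears, using a dictionary.
text = "the quick brown fox jumps over the lazy dog the end"
counts = {}
for word in text.split():
    # get() returns 0 if the word is not yet a key, avoiding a KeyError.
    counts[word] = counts.get(word, 0) + 1

for word, n in sorted(counts.items()):
    print(word, n)
```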
Students use Jupyter Notebook to complete several assignments. The cheat sheet helps them, once Jupyter Notebook is installed, to launch it, save their work, and shut it down correctly.
Students are introduced to web scraping with the BeautifulSoup library in the second week. See the README in the web_scraping folder for instructions on installing BeautifulSoup, as well as some basic uses of the library.
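A minimal sketch of those basic uses. To keep it runnable offline, it parses an inline HTML snippet (invented here) rather than requesting a live page; in real scripts the HTML would come from `requests.get(url).text`.

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <h1>Demo Page</h1>
  <p class="lede">First paragraph.</p>
  <a href="https://example.com/one">One</a>
  <a href="https://example.com/two">Two</a>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

print(soup.h1.text)                        # text of the first <h1>
print(soup.find("p", class_="lede").text)  # find one tag by class
for link in soup.find_all("a"):            # find all matching tags
    print(link.get("href"))
```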
The README in the mitchell-ch3 folder supplements chapter 3 in Web Scraping with Python, by Ryan Mitchell. The chapter is very challenging for beginners, so a couple of .py files and examples are included to ease the way.
Example files are included for scraping all URLs from a page, and for scraping the same data items from numerous pages, using a list of URLs.
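The two tasks can be sketched as a pair of small functions (these are not the course's own scripts; the canned pages below stand in for `requests.get(url).text` so the sketch runs offline):

```python
from bs4 import BeautifulSoup

def get_urls(html):
    """Return every href found in an HTML document."""
    soup = BeautifulSoup(html, "html.parser")
    return [a.get("href") for a in soup.find_all("a") if a.get("href")]

def get_title(html):
    """Return the text of the first <h1>: the same data item from any page."""
    soup = BeautifulSoup(html, "html.parser")
    return soup.h1.text if soup.h1 else None

# Loop over a list of pages, collecting the same item from each one.
pages = [
    "<h1>Page One</h1><a href='/a'>A</a>",
    "<h1>Page Two</h1><a href='/b'>B</a><a href='/c'>C</a>",
]
for html in pages:
    print(get_title(html), get_urls(html))
```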
The README in the more-from-mitchell folder highlights the points we cover in our third week with Web Scraping with Python, by Ryan Mitchell. We don’t have time to read the entire book, so we need to jump around and get acquainted with some common scraping problems and their solutions.
This section covers using Selenium, setting HTTP headers, writing scraped data to CSV files, pausing between requests with Python's sleep timer, and choosing parsers.
Example files are included for writing scraped data to CSV files and to a MySQL database, using Selenium, and sending email from a Python script.
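A minimal sketch of the CSV-writing and sleep-timer pieces (the rows are canned here; in a real scraper each would come from a fetched page):

```python
import csv
import time

rows = [
    ("Title One", "2019"),
    ("Title Two", "2020"),
]

# newline="" prevents blank lines in the CSV on Windows.
with open("scraped.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "year"])   # header row
    for row in rows:
        writer.writerow(row)
        time.sleep(0.1)  # polite pause between requests in a real scraper

with open("scraped.csv", newline="", encoding="utf-8") as f:
    print(list(csv.reader(f)))
```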
Students are introduced to Flask, a Python framework, in the fifth week of Python. See the README in the flask folder for details.
This section has several parts, explaining templates, app deployment, Flask-WTF forms, and Flask-SQLAlchemy for database apps.
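A bare-bones starting point of the kind the flask folder builds on (this sketch assumes the file is saved as hello.py; the routes are invented for illustration):

```python
from flask import Flask

app = Flask(__name__)          # one app object per project

@app.route("/")                # maps the URL path "/" to this function
def index():
    return "Hello, Flask!"

@app.route("/user/<name>")     # <name> in the route becomes an argument
def user(name):
    return f"Hello, {name}!"

# Run from the terminal with:  flask --app hello run
```

Templates, forms, and databases then layer onto this same app object.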