Developing PHP Developers: The Many Special Talents of Jansen Price

NerdCast

We are joined this week by Jansen Price, a Principal Software Engineer (big kahuna of PHP) from The Nerdery. He shares a unique perspective on how we handle ongoing education and facilitate growth as developers. He also talks about the book he is writing on learning object-oriented languages and what it takes to learn new things.

Easter Egg: The intro and outro music is performed by Jansen and his mad beat boxing skills.

Episode: #94

Host: Ryan Carlson

Guest: Jansen Price – Principal Software Engineer (PHP)

Running Time: 0:21:43


Filed under NerdCast

DataImportHandler: Recreating problems to discover their root

When a client asked me to address the performance problems with their Solr full-imports (9+ hours), I knew I was going to have to put on my computer-detective hat.

First, I needed to understand the current import process and environment. Their import process used a Solr contrib library called DataImportHandler. This library allows an end user to configure queries to pull data from various endpoints, which will then be imported into the Solr index. It supports a variety of data endpoints, such as Solr indexes, XML files – or in this client’s case, a database. The DataImportHandler defines two types of imports, full and delta. The query for the full index should be written to pull all the data that is required for the index. The query for the delta index should be written to pull only the data that has changed since the last index was run.
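To make that concrete, here is a sketch of the shape a DataImportHandler entity takes (the query, deltaQuery, and deltaImportQuery attributes come from the DIH documentation; the table and column names are hypothetical):

<entity name="item"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item WHERE last_modified > '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item WHERE id='${dataimporter.delta.id}'" />

The query attribute drives the full-import; deltaQuery finds the IDs that have changed since the last run, and deltaImportQuery re-fetches just those rows.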

Once I understood the basics of the DataImportHandler, I needed to reproduce the problem. The DataImportHandler has a status page which displays – in real-time – the number of requests made to the datasource, the number of rows fetched from the datasource, the number of processed records, and the elapsed time.  I had the client start a full-import and monitored this page.

I knew that a full-import would take 9+ hours and the end result would be slightly more than 6 million records in the Solr index, but the number of fetches to the datasource and the number of rows fetched were trending up significantly faster than the number of records processed. By the end, there had been over 24 million requests to the datasource and over 31 million rows fetched. Obviously, this was a significant source of the 9+ hour full import, but it wasn't clear why so many queries and fetches were being made.

With a possible source of the performance problems identified, I needed to look at the queries that were being used by the DataImportHandler. I dug into the configuration files and found this:

<entity name="z" query="select * from a">
    <entity name="y" query="select * from b WHERE a_id='${z.id}'" />
    <entity name="x" query="select * from c WHERE a_id='${z.id}'" />
    <entity name="u" query="select * from d WHERE a_id='${z.id}'" />
    <entity name="t" query="select * from e WHERE a_id='${z.id}'" />
</entity>

A quick Google search confirmed my fear: for each record in the outer query, each of the inner queries was executed once. I did a quick count of the records in the outer query – slightly more than six million. Multiply that by the number of inner queries, four, and you get 24 million requests to the datasource. Both of those numbers were in line with the results from the status page, though the number of rows fetched was still off for an unexplained reason. A quick review of the schema showed that both the "t" and "x" entities were multivalued, meaning the database could return more than one row for each query. That accounted for the 31 million rows fetched from the datasource.

I now had an explanation for the numbers I was seeing, but I still hadn't confirmed that the DataImportHandler was bottlenecked on the database queries. Unfortunately, there wasn't a good way to determine this directly. The best idea I came up with was to convert the database to another format that the DataImportHandler could read faster, but that would have taken a non-trivial amount of setup work. Instead, I settled on using a combination of "SHOW PROCESSLIST" on the MySQL server and strace to monitor the import while it ran. By the end of the day, the problem was obvious: the DataImportHandler was spending most of its time waiting for the database to send data after each query.
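For anyone repeating this kind of investigation, the monitoring boiled down to two terminals (a sketch – the Solr process ID is a placeholder you would look up yourself with ps or pgrep):

watch -n 1 "mysql -e 'SHOW PROCESSLIST'"      # is MySQL busy, or waiting for work?
strace -c -f -p <solr-pid> -e trace=network   # tally where Solr's time goes

The -c flag makes strace print a per-syscall time summary when you detach, which is what made the long reads from the database socket stand out.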

With the quantity of queries representing the majority of the full-import run time, I began researching alternative ways of fetching the data. That’s when I found the CachedSqlEntityProcessor. This processor would fetch all the records for each entity, then stitch them together on the fly. In my example, it would reduce the number of requests to the datasource to 5! I immediately rewrote the entities to use this processor and started up another full-import.
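For reference, the rewrite amounted to tagging each child entity with the processor and telling it how to join back to the parent (a sketch using the Solr 4.x attribute names – older releases used a where attribute instead):

<entity name="z" query="select * from a">
    <entity name="y" query="select * from b"
            processor="CachedSqlEntityProcessor"
            cacheKey="a_id" cacheLookup="z.id" />
    <!-- ...and likewise for x, u, and t -->
</entity>

Each child entity now issues its select exactly once, caches the rows keyed on a_id, and resolves the per-record lookups against that in-memory cache instead of the database.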

Three hours later, the import was done. A 66% improvement was satisfying, but the client was hoping for something closer to an hour, so it was time to search for bottlenecks again. Another strace test showed the biggest offender was still waiting for data from the database, so I focused on tuning the MySQL server. Unfortunately, this was a mistake: I hadn't thought critically about what the bottleneck actually was. No matter what I did to the MySQL settings, I never saw more than a 1% improvement in run time.

After a significant amount of frustration, I realized the actual problem: disk I/O. It turns out that running a MySQL server on EC2/EBS is a terrible idea for a number of reasons.

* By default, an EC2 instance's internet and EBS traffic share the same network connection. Thankfully, a simple setting, EBS-Optimized, can be flipped to give the instance a dedicated connection to its EBS volumes.
* Standard EBS volumes are not suited for sustained load. There is no guarantee of IOPS, so they can fluctuate significantly. Thankfully, you can create provisioned IOPS EBS volumes, which allow you to define what kind of performance you need.
* Provisioned IOPS EBS volumes still have per-volume maximums, so there is a ceiling on what a single volume can deliver.

The best solution would have been to move the database into Amazon RDS, which fixes most of these problems, but that was too big a change to make quickly. Instead, we made both the Solr and database servers EBS-optimized and set up provisioned IOPS EBS volumes. After experimenting with a few IOPS values, we settled on 2,000.

It was finally time to run a complete test with all the changes in place. The investigation had taken weeks of my time, so the anticipation was killing me. The final full-import time: 35 minutes.


Filed under Tech Tips

Hey, Teachers: Monetizing Learning Online (Part 1 – Udemy)

Learning is big business. Traditional systems of education have long been established, and a private-college education can be a cash cow. With those formal education systems being heavily regulated, it should be no surprise that many different types of learning platforms have emerged to meet the growing demand for learning. These platforms span a range of delivery mediums, and all of us probably have a personal favorite.

This post is part of a series of posts that take a look at the learning marketplace. While we certainly won’t cover every specific facet, I do want to discuss the online platforms that exist and how you might tap into them – both to learn, and to teach.

For some quick context, I have been teaching online for a college for the past four years. All my interactions with my students have been through live video chat, phone calls, and e-mail, and I am all too familiar with the Department of Education's accreditation process. Additionally, most of what I use today in my professional career I started learning from online platforms such as Lynda.com and DigitalTutors.com; to some extent, I have them to thank for my career. Finally, I have been crafting my own online courses, and will refer to them often as I talk about the different platforms available to learn from.

Before we tackle our first platform, I want to share how I like to explain learning; I consider it a three-step process. The first step is what we traditionally think of when we think of "learning" something new: the collection of raw information from a resource. For example, when I lecture to my class, I am executing this first step in my students' learning. Another example would be reading a book on the basics of Photoshop. This is where tools and methods are explained. The second step is the learner's own application of that knowledge – where they take the tools they have learned and apply them. The final step is the demonstration of knowledge to a master, receiving feedback and validation. This is an important step because it gives the learner a chance to be corrected; you can think of it as "learning through mentorship." Another example: when I correct assignments, I do not simply slap a grade or points onto a submission; rather, I go into detail about what worked and what didn't. If something is not working for a student, I talk them through the process I would use.

Another quick way to break it down is this: Step one is about building “knowledge nodes,” step two is about forming connections from node to node, and step three is the validation that those nodes are connected properly, and if not, getting it corrected.

When we examine the existing platforms, it is important to identify which of these steps are being properly engaged, and which are not.

The first platform I want to discuss is Udemy. Udemy is sort of the "every person's" learning platform. Students can go there to learn about all sorts of things, ranging from your first days in Photoshop to the basics of woodworking. As a teacher, it is hands-down the easiest platform to engage with. Anyone can sign up to teach a course that will be listed as free. If you would like to charge for your course, there is a short, painless vetting period in which a Udemy administrator validates your credentials. They are not looking for teaching experience, but rather that you have some domain knowledge of the topics you would like to teach.

Having gone through the process of becoming a "Paid Instructor," I can say that it was quick and without incident. From there, you are given online tools to start building out your class. They are familiar, easy-to-master tools that help you populate your course with content you create. It's video driven, so you do need a bit of knowledge about properly creating videos – and if you don't, there is a Udemy course for that! Udemy itself encourages the creation of additional content to engage multiple learner types. For example, not only did I post video lessons for my first course, but I also created a written version for users to download. The platform has organization tools to curate the content into chapters that contain lessons. Finally, there are tools to help showcase your class and handle administrative tasks, such as pricing your class (courses can range anywhere from $5 to $199 and beyond).

Outside of that is an interesting mix of tasks and information. Udemy encourages you to join "Udemy Studio," a Facebook group of Udemy instructors whose purpose is to share information and ask questions about how courses are constructed. The other thing Udemy talks a great deal about is the marketing of your course. Whoa. Wait. What?

That's right: you are solely responsible for getting your name out there. Udemy does take an active role in promoting some courses, but those account for very few of the courses that make it to their market – and when Udemy does the promoting, it usually takes a bigger cut of each sale (starting at 50%). So you are on your own unless you are able to get noticed by Udemy and are willing to take a smaller cut of the revenue pie. Which brings up the next reality of Udemy.

Udemy learners are accustomed to getting discounted or free courses, to the point that Udemy themselves have written a few blog posts about how to "market your class." These posts are all about self-promoting through your networks (which is something everyone should do when releasing any product), but also about creating and distributing coupon codes. Having gone through the process covered in their blog posts, I will certainly agree that it helps bring in students. The reason I bring this up is that if you start looking at classes, you will see that one might have 1,000+ students and charge $60 a head. You might instantly think, 'Wow, $60k! Even half of that is great!' The reality is that the instructor did not pull in $30k in this example. Chances are, they brought in somewhere in the neighborhood of $1-5k.

Your typical coupon code that gains traction will be around 75-100% off the course – because, honestly, who can resist the big tag that says "90% off"? At that point it could be just about anything and it would at least grab your attention. From there, you lose another 50% of the net sale, so offering a 90%-off coupon lets you capture about 5% of the course list price. Then you will need to claim it on your taxes, so there is that, too. Unofficially documented is the practice of marking your course up only to heavily discount it.

Circling back to the learner perspective, Udemy can also be hit-or-miss in engaging the three steps of learning I covered earlier. There is no doubt that the first step is hit – the teaching of raw skills. But the application is up to the learner, and if the instructor gives no clear "here is how to practice" guidance, it may be up to the learners themselves to figure that out. The final step is also hit-or-miss; it really depends on the engagement level of the instructor and whether or not they use some of the non-required tools (such as quizzing). I reach out to students who I know are nearing course completion to see if there is anything I can help with or take a look at. But this is a rare practice among Udemy instructors, as there is no accountability around it. The tools for engaging students are also very primitive: the best I can do is scroll through an unorganized list of students, and my only option is to send them a Udemy message.

The point I want to make here is that Udemy is a fantastic platform for those who need a low barrier to entry into the educational market – whether because of a lack of experience, or because the other platforms don't fit your course idea. It gives you a venue for online education, free of charge, but if you start making money, you will have to share (a lot). If you are looking to capture huge returns in the educational space, Udemy is probably not the place to focus. If learner outcomes are your focus, Udemy has some tools, but they are certainly limited, so once again Udemy is probably not the platform to choose.

So where do you go? That is a question we will explore in future posts covering platforms such as Skillshare and Pathwright. But now would be a good time to start thinking about what you want to get out of bringing your knowledge online. Are you looking to make money, or do you simply enjoy teaching? Do you have a network of potential students built, or are you starting from scratch? Are you looking to focus on a small group of learners and see results, or to offer the information to a broader group with less interaction and validation? As we dive deeper into the platforms and their focuses, the answers to these questions will inform what your education monetization plan looks like.


Filed under Articles

Mapmaker, Mapmaker, Map Me a (Google) Map

A picture of Google Maps with the search field reading "Jane, stop this crazy thing!"

So you want to embed Google Maps in your website. Maybe you have a list of locations to display, or perhaps you need to provide directions or perform simple GIS operations. You're a pragmatic person, so you don't want to reinvent the wheel. You've settled on Google Maps as your mapping platform, acquired your API key, and you're raring to go. Awesome! I'm so excited! Google Maps offers great bang for the buck, and its API is well documented and easy enough to use. But there's a downside: Google Maps has become the PowerPoint of cartography.

But all that’s just fine, because we can still do some great work with this tool. I’ve written some general tips that I’ve learned after making a few of those “Contact Us” and “Locations” pages you see, but they are far from prescriptive. You, dear reader, are the adult in the room. All of these tips should be taken with a grain of salt.

I've written this article from the perspective of someone who is familiar with JavaScript and DOM events, but I also hope it will raise important questions and gotchas for people who want great maps and have chosen Google. This article should take about five minutes to read. Let's get started!

You have options. Specifically, you have MapOptions.

Even a cursory glance over the documentation for MapOptions will be worth your time. This is Google Maps' "Look what I can do!" section. Want a simple map without all the UI bureaucracy? Oh look, disableDefaultUI, right there! How nice! Most UX and UI tweaks can be done with careful, considered use of MapOptions. Experiment with a variety of configurations.
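As a quick taste, here is a minimal sketch (the element ID and coordinates are hypothetical placeholders):

var map = new google.maps.Map(document.getElementById('map-canvas'), {
    center: new google.maps.LatLng(44.977, -93.265),  // downtown Minneapolis
    zoom: 13,
    disableDefaultUI: true,  // drop the whole default control suite...
    zoomControl: true        // ...then opt back in to only what you need
});

Every key in that object literal is a MapOptions property, so this one spot controls most of your map's look and behavior.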

Own your map. Own your Style.

Various styled and colorized maps juxtaposed with Andy Warhol's classic screen printing of Marilyn Monroe.

Google Maps v3 has Styled Maps. For a crash course, take a quick jaunt over to SnazzyMaps for inspiration – note how their own map style reinforces their brand. If you really wanna get your hands dirty, though, you'll have to learn Google's Styled Maps from the ground up. The results are worth it, and for the extra mile you can style Map Markers and Info Boxes too. Be sure to check Google's Styled Map Wizard. You can also head over to Stadtwerk.org's Styled Map Colorizr. Don't forget your color theory!
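Under the hood, a style is just an array of rules handed to the map through MapOptions (a sketch – the colors are arbitrary examples):

map.setOptions({
    styles: [
        { featureType: 'water', stylers: [{ color: '#a3ccff' }] },
        { featureType: 'road', elementType: 'geometry',
          stylers: [{ hue: '#cc2222' }, { saturation: -60 }] }
    ]
});

Each rule targets a featureType (water, roads, parks...), optionally narrows to an elementType (geometry vs. labels), and applies stylers such as color, hue, saturation, lightness, and visibility.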

Remove irrelevant labels.

A simple comparison of a map with and without labels.

If your labels compete with the data, get rid of them or find a way to quiet them. Do your users need to be told where the North Atlantic Ocean is?  Do you need to know the province names AND the city names? This isn’t to say that all labels should be removed, but hiding extraneous information will give your data more elbow room.  Don’t make your map be all things to all people; make it the right thing for your people.
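Quieting labels is one more styler rule (a sketch):

map.setOptions({
    styles: [{ elementType: 'labels', stylers: [{ visibility: 'off' }] }]
});

Swap 'off' for 'simplified' when a label class should be quieted rather than removed outright.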

Stay on topic; constrain the view.

Hi. I have ADD. If a content manager takes three hours to enter the geolocations of all 52 artisan waffle shops onto a map of North America, you can bet that within 10 seconds I'll have left the confines of North America and centered my map on Ouagadougou because OOH, BUTTERFLIES!

Stay on topic and stay the course.  Reducing scope also helps you focus your own optimization efforts. You’ll enjoy a reduced workload, and your user will enjoy an increased attention span.

Consider setting minimum and maximum zoom in the MapOptions object. As for constraining the viewport, you may have to write some custom code to stay on topic. If users need to view a particular location up close, it's very easy to provide an external link to maps.google.com.
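Here is one way that custom code can look (a sketch – the bounds below roughly box North America and are placeholders for your own):

map.setOptions({ minZoom: 4, maxZoom: 12 });

var allowedBounds = new google.maps.LatLngBounds(
    new google.maps.LatLng(24.0, -126.0),   // southwest corner
    new google.maps.LatLng(50.0, -66.0));   // northeast corner

google.maps.event.addListener(map, 'center_changed', function () {
    var c = map.getCenter();
    if (!allowedBounds.contains(c)) {
        // Nudge the center back inside the allowed box.
        map.setCenter(new google.maps.LatLng(
            Math.min(Math.max(c.lat(), 24.0), 50.0),
            Math.min(Math.max(c.lng(), -126.0), -66.0)));
    }
});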

Consider Clusters for large numbers of Map Markers.

A comparison of individual map markers versus marker clusters.

Google's own write-up on the problem – "Too Many Markers!" – is required reading: https://developers.google.com/maps/articles/toomanymarkers
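The usual tool is the MarkerClusterer utility library – a separate download, not part of the core API (a sketch; locations is a hypothetical array of LatLng objects):

var markers = locations.map(function (position) {
    return new google.maps.Marker({ position: position });
});
var clusterer = new MarkerClusterer(map, markers);

Hand the markers to the clusterer rather than calling setMap on them yourself; the clusterer decides what to draw at each zoom level.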

Watch out for UX Gotchas!

Avoid full-bleed maps on mobile devices, as the user may be unable to scroll past them. Leave small gutters on either side to give enough affordance to scroll freely.

Imagine a long blog post (like this one) with a map near the bottom. A smartphone user scrolls down, dragging their finger across the glass. Eventually the large map scrolls into view, and the scrolling momentum takes the map entirely into their viewport. The user intends to skip this content and continue scrolling down, using the same gesture – they have no choice. GOTCHA! Instead of scrolling the document, the map captures the event and pans. If the map takes up the entire viewport, the user is effectively trapped. What an awful thing to happen! On mobile, be very, very careful with maps. Your users will reach Antarctica before they reach the footer.

A similar case can happen with mouse scrolling on desktops. Turning off scroll-to-zoom in the API prevents the map from capturing scroll events, which otherwise can be frustrating. Add a zoom control if your design calls for it, but consider disabling scroll-to-zoom. Again, pay attention to MapOptions.
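In MapOptions terms, that is a single flag (a sketch):

map.setOptions({
    scrollwheel: false,  // stop hijacking the mouse wheel
    zoomControl: true    // keep deliberate zooming available
});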

Maps are for everyone. Support Localization.


Google has a wide range of supported languages for its Maps API, but you may need to tweak your JavaScript tags to enable them. Not only are labels properly localized, but so are controls and even routing directions. If you support internationalized content, this little tweak is definitely worth your time.
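The language rides along on the API script tag (a sketch – YOUR_KEY is a placeholder, and 'de' requests German):

<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_KEY&language=de&region=DE"></script>

The optional region parameter additionally biases geocoding and directions toward that locale.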

There is life outside of Google Maps.

A Mercator projection comparing the apparent size of Greenland against the size of the continent of Africa.

Google Maps is an excellent tool, but it may not always be the right tool for the job. Ask yourself what it is you want to show. Scale matters. If you want to show people how to get to your locations, then Google Maps is often an appropriate tool.  If you want to show that you have an international presence, then Google Maps’ default Web Mercator projection may not be… politically correct.

Without falling too deep into the cartographic projection rabbit hole, do consider the Winkel Tripel or Robinson projections instead of Web Mercator when your data is presented at the global level. These projections hew closer to the way the world actually looks – note the size of Greenland compared to the continent of Africa. Be aware, however, that the Google Maps JavaScript API v3 is strongly oriented around Web Mercator.

Let's not be too hard on Google Maps. It's a round planet and your monitor is flat; the math dictates that something has to give. Suffice it to say that the eternal battle of conformal projections against equal-area projections may seem a bit academic to people who just wanna know how to get to Denny's.

If you love data but hate Mercator projections like I do, then may I suggest that these problems may require tools outside of Google Maps? (Because I just did.) Technologies such as D3.js, OpenLayers, and desktop GIS software may need to be employed. But then you can be the coolest kid at the party.
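For a flavor of the alternative, here is a minimal sketch using D3 v3 (current as of this writing) and its geo.projection plugin, rendering GeoJSON you supply yourself:

var projection = d3.geo.winkel3()     // from the d3.geo.projection plugin
    .scale(150)
    .translate([480, 250]);           // center of a 960x500 SVG

var path = d3.geo.path().projection(projection);

d3.select('svg').selectAll('path')
    .data(countries.features)         // 'countries' is your GeoJSON (hypothetical)
    .enter().append('path')
    .attr('d', path);

Swap d3.geo.winkel3() for d3.geo.robinson() and nothing else changes – which is rather the point.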


The Art of Remote Communication: "Say that again?" "Garble goble jibble plurb."


The Scenario

Maybe you have had a dream that goes something like this… You have a very important meeting ahead of you. You are about to meet a brand new client and you've practiced your pitch to a tee. Your mind has been racing for hours, if not days, and your meeting objectives have been looping in your head. Naturally you want to do great and you want the meeting to be a success. A lot is at stake and you don't want to mess up. Instead, it turns out more like this…

You walk into the room of executives and shake hands, but none of them make eye contact with you; some ignore you, some don't notice you at all, and others stare at you uncomfortably. You look to your teammates for help and assurance, but they don't notice you either. The meeting starts and it's your turn to speak. You move your mouth but no words come out. You become more and more uncomfortable; you start to sweat, even panic. You've missed your chance to speak and the meeting moves on. You didn't get to contribute, and even worse, you fear you've made an awful impression on the client and your peers. You did. You wake up panicked and infuriated. Thankfully it was just a dream.

Only this is not a dream but the reality of what a typical remote meeting can be like. You walk into the meeting focused on the agenda, only to be taken off course by technical and engagement difficulties. Common occurrences like video disruptions, microphone echoes, and phone static all get in the way of effective, natural conversation. This can make remote participants feel invisible and awkward at best; it's easy to feel disconnected and left out without the physical presence of the room and the people in it.

Unfortunately, that is the reality more often than not. If your job is, for the most part, not task-execution focused, you will have to make up for the lack of rich human interaction that happens naturally between people who share a physical location. Digital tools such as video conferencing, IM, and constant e-mailing filter out the rich human qualities that are critical in fields like UX, where the success and quality of a project is driven by collaboration and frequent brainstorming.

Know the Limitations of Digital Communication Tools

Armed with talented WFH (work-from-home) staff and distributed offices, we at The Nerdery work remotely a lot; here are a few takeaways from our experiences. We give technologies (e.g. Google Hangouts, Skype, Jabber) too much credit. In theory, you would expect to hop on Google Hangouts or Skype and have a near-flawless, normal human interaction. That is an ideal user expectation which, at this point in time and in our experience, is still just a theory. We like to believe that current remote communication methods are as reliable as we expect them to be, but be prepared to deal with common issues. Video interruptions, audio cutting out, microphone feedback and echoing, and loss of visual contact are all little things that will ultimately throw you off and limit your effectiveness.

An unavoidable side effect of remote meetings (formal and informal) is miscommunication, which ranges anywhere from mishearing – or not hearing at all – what is said, to being unable to pick up the nonverbal body language of people you cannot see. Some people are very verbal, whereas others understand the world more by observing and perceiving their environment. For those people, collaborating and running meetings remotely is excruciatingly painful, as the essence of the environment is filtered out by the digital tool. It becomes essential to have a game plan when working remotely.

Implement a Strategy to Increase Efficiency and Effectiveness

Anticipating issues and implementing strategies is half the battle when it comes to remote communication. The first rule is to set expectations early with both your team and the client. If there is no established standard at your company for remote interactions, create your own set of expectations based on what has worked for you and share it with your team. The list should include the essentials you need in order to be effective and for the project to succeed. This can include convening five minutes before a meeting to ensure all the tech is working properly and to go over the agenda.

Another example would be to call in after a client meeting to debrief, since a lot of conversations happen "offline"; if you are working remotely, it is critical that you create opportunities to be part of these informal post-meetings. What works is to schedule an internal, 5-10 minute post-meeting debrief. There is still no effective way to make up for the spontaneous conversations that happen naturally, so to ensure that you are not disconnected from what is going on, ask the project manager or lead for recaps or status updates made available first thing in the AM and/or PM. Depending on the project, those may happen once daily or a few times per week, but it is essential that they happen.

In Summary

Remote clients, distributed teams, and remote offices are here to stay. Communicating remotely at this point in time is not perfect but it is essential and should be embraced.

Hopefully, in the near future, technology will advance enough to give remote participants a richer presence that does not strip us of our human essence. In the meantime, it is essential for any company to have standards in place and an etiquette that strives toward better remote communication practices. A set of ground rules to work from will make up for the lack of rich communication between distributed (non-collocated, remote) teams.

Stay tuned for a series of follow-up posts where practical remote-communication techniques and tools will be presented in more detail.


Filed under Articles, The UX Files

True Romance at the Overnight Website Challenge

You can't make up a how-we-met story quite like the romantic Web Challenge tale of Angie Sheldon and Reed Enger – a happy couple who returned to the Web Challenge last weekend as guests of honor to relive the memory of meeting cute three years ago at our pre-Challenge needy-meets-nerdy speed-dating mixer, where they were first paired up.

For security's sake, update WordPress to version 3.8.2

On April 8, 2014, WordPress released a security update, version 3.8.2. The announcement accompanying the release states: "this is an important security release for all previous versions and we strongly encourage you to update your sites immediately."

WordPress 3.8.2 addresses two potentially serious security vulnerabilities, includes three security-hardening changes, and fixes nine other bugs. Most notably, the following security issues are addressed:

  • Potential authentication cookie forgery. CVE-2014-0166. (Very serious vulnerability!)
  • Privilege escalation: prevent contributors from publishing posts. CVE-2014-0165.
  • Pass along additional information when processing pingbacks to help hosts identify potentially abusive requests.
  • Fix a low-impact SQL injection by trusted users.
  • Prevent possible cross-domain scripting through Plupload, the third-party library WordPress uses for uploading files.

Additionally, Jetpack – the feature-rich plugin suite from wordpress.com – was updated to version 2.9.3 to address similar issues.

If your site is running a WordPress version below 3.8.2 or a Jetpack version below 2.9.3, you may be at risk and should upgrade as soon as possible.


Filed under Tech News, Technology

The Evolving Technology of Social Media

This webinar explores the technology behind the tools businesses and community managers are integrating into their software platforms. This is not a discussion about which keywords resonate best with an audience or about optimal word counts. Nerdery developers and social media integration specialists Thomas McMahon and Doug Linsmeyer describe the software options typically leveraged in social media integrations. Our audience gave feedback that anybody could follow this conversation, regardless of their technical level. Social media consultants, account managers, and anybody seeking an understanding of the tools and technology going into today's social media integrations will find this discussion useful.

Slide Deck: To view the slide deck you can visit our Slideshare page.

Bonus Q&A Podcast: (running time 9:22)

Our panel of experts follows up on three of the interesting questions from our live audience that we didn't have time to address during the webinar:

  • Are location-aware tools like Foursquare still worth considering?
  • What technology solutions can promote a mobile business?
  • How the different social platforms differentiate themselves – and more.

Heartbleed bug security alert: Your web server/data may be vulnerable – test your domains

On Monday evening, a security firm announced a new vulnerability in a key internet technology that can result in the disclosure of user passwords. This vulnerability is widespread, affecting as much as two-thirds of the web servers on the planet, including top-tier sites like Yahoo and Amazon. If you have a secure (HTTPS) website hosted on a Linux/Unix server using Apache or Nginx – or any other service using OpenSSL – you are likely vulnerable.

For a detailed breakdown of this vulnerability, please see this site. We urge you to assess your vulnerability immediately, and reach out for help.

How can I see if my servers are vulnerable?

You can use the test site at http://filippo.io/Heartbleed/ to check your domains for the vulnerability. Enter the domain of your HTTPS website; if you get a red positive result, you are vulnerable.

In addition, you can execute the following command on your servers to see if they are running a vulnerable version of OpenSSL:

openssl version -a

If the version returned is 1.0.1, and its build date is before April 7th, 2014, you are vulnerable.

How can I fix it if I am vulnerable?

You will need to obtain a patched version of OpenSSL and install it on all vulnerable servers. Updated packages should be available for Debian, Red Hat, Ubuntu, and CentOS via their package managers. If a package is not available for your platform, you can recompile OpenSSL (version 1.0.1g) with the -DOPENSSL_NO_HEARTBEATS flag, which disables the vulnerable heartbeat feature. After updating, restart any services that use SSL and re-test your domain using the link above (http://filippo.io/Heartbleed/).

For information on your specific Linux distribution, consult your distribution's security advisories.

Additionally, you should strongly consider changing passwords and/or reissuing SSL certificates, but only after OpenSSL has been updated.

What is the vulnerability?

The vulnerability, called Heartbleed, allows attackers to obtain sensitive information from servers running certain versions of OpenSSL. Examples of sensitive information include private keys for SSL certificates, usernames and passwords, SSH private keys on those servers, and more. Attackers who obtain the keys to your SSL certificates can then set up a man-in-the-middle attack between you and your customers and capture secure information, such as credit card numbers and authentication credentials. The vulnerability was publicly disclosed on Monday, 4/7/2014.

If you have any questions, please contact us, or ping your own go-to Nerdery contact right away. We’ll help analyze your risk and protect your data. If The Nerdery can be a resource to you in any way, we will.


Filed under Tech News, Technology

Dashicons Make Your WordPress Dashboard Cooler

What Are Dashicons?

On December 12, 2013, WordPress 3.8 – code name "Parker" – was released. One of the highlights of 3.8 was the re-skin of the WordPress admin, officially called the Dashboard. The re-skin traded the blue, grey, gradient-heavy interface for a more modern, flat design. Included in the update was Dashicons, an icon font.

An icon font has all the benefits of being text and none of the downsides of being an image. Size, color, and pretty much anything else you can do with CSS can be applied to icon fonts.
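For example, recoloring and resizing an icon takes two CSS declarations (a sketch – the .my-widget class is a hypothetical hook of your own):

.my-widget .dashicons {
    color: #d54e21;    /* any CSS color works */
    font-size: 24px;   /* icons scale like text */
}

Try that with a PNG sprite and you're back in an image editor re-exporting assets.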

There are several ways to use Dashicons inside the Dashboard. For this example I'll be using a plugin, but you don't have to – if you're more of a functions.php person, the code will work there too. I'll also be skipping over the details of customizing WordPress to focus on the Dashicon-specific code.

Set Up Base Plugin

I already have the plugin created, uploaded, and activated; you can see the final plugin on GitHub. For each section, I'll only be highlighting the relevant code, which could live in your own plugin or in functions.php by using the appropriate hooks and filters.
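To give a flavor of what's ahead, here is a minimal sketch of the kind of hook involved – an admin menu page whose icon is a Dashicon ('dashicons-smiley' ships with WordPress 3.8; the slug and callbacks are hypothetical placeholders, not the actual plugin's code):

add_action( 'admin_menu', function () {
    add_menu_page(
        'Dashicons Demo',   // page title
        'Dashicons Demo',   // menu title
        'manage_options',   // capability required to see it
        'dashicons-demo',   // menu slug (hypothetical)
        function () {       // page renderer
            echo '<h1><span class="dashicons dashicons-smiley"></span> Hello, Dashicons!</h1>';
        },
        'dashicons-smiley'  // any Dashicons class name works as the menu icon
    );
} );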



Filed under Technology