Sass Compilation

You’ll likely want to use some form of CSS preprocessor in your workflow. In this lesson, I’ll demonstrate how to compile Sass as part of your Webpack build. It’s easy!
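For reference, here is a minimal webpack configuration sketch for this kind of Sass setup. It is not taken from the episode; the entry and output paths are placeholders, and it assumes the sass-loader, node-sass, css-loader, and style-loader packages are installed:

// webpack.config.js (a hedged sketch, not the episode's exact setup)
const path = require('path');

module.exports = {
    entry: './src/app.js',
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'
    },
    module: {
        rules: [
            {
                test: /\.scss$/,
                // Loaders apply right to left: compile the Sass, resolve CSS
                // imports, then inject the resulting styles into the page.
                use: ['style-loader', 'css-loader', 'sass-loader']
            }
        ]
    }
};

With that in place, a require('./app.scss') call in the entry file is enough for Webpack to pull the stylesheet into the build.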
Source: Laracasts

Laracon Online Schedule Announced

Laracon Online has just announced the event schedule for the conference that will be held on March 8th. All times are in EST:

  • 8:00am – Mingle in Slack
  • 8:45am – Opening remarks
  • 9:00am – Jeffrey Way
  • 10:00am – Evan You
  • 11:00am – Break & Mingle in Slack
  • 11:15am – Rachel Andrew
  • 12:15pm – Adam Wathan
  • 1:15pm – Break & Mingle in Slack
  • 1:30pm – Taylor Otwell
  • 2:30pm – Nick Canzoneri
  • 3:30pm – Break & Mingle in Slack
  • 3:45pm – Jason McCreary
  • 4:45pm – Matt Stauffer
  • 5:45pm – Closing remarks
  • 6:00pm – Mingle in Slack

With almost 3,500 tickets sold, this will be the biggest Laracon ever, and you can still purchase tickets. Even if you can’t make the event, a ticket purchase will still get you the videos after the conference ends.

Source: Laravel News

Laravel Include When Directive

A new feature in Laravel Blade is an includeWhen directive. This allows you to simplify a typical if statement into a single line.

“I think it’s a cool feature as it tidies up so much boilerplate”, said James Brooks, the author of the pull request.

To see this feature in use, pretend you have this common setup:

@if(Auth::user())
    @include('nav.user')
@endif

Now, this can be simplified using includeWhen:

@includeWhen(Auth::user(), 'nav.user')

Or as another example for those using Laravel’s Authorization system:

@if ($user->ownsPost($post))
     @include('posts.edit-controls', ['post' => $post])
@endif

Can be changed to the following:

@includeWhen($user->ownsPost($post), 'posts.edit-controls', ['post' => $post])

This feature is now included in Laravel and you can run a composer update to be sure you are on the latest release. For more Blade features check out the Laravel Blade category here on Laravel News.

Source: Laravel News

Minification and Environments

In this episode, we’ll learn how to minify our JavaScript with Webpack and Uglify. However, in the process, we’ll also need to review environments. Often, you’ll want to use one set of configuration for development, and another set for production. I’ll show you how.
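As a rough illustration of the idea, not necessarily the episode's exact code, you can key the plugin list off NODE_ENV; this sketch assumes Webpack 2 and its bundled UglifyJsPlugin, with placeholder entry and output names:

// webpack.config.js (environment-aware sketch; assumptions noted above)
const webpack = require('webpack');

const inProduction = process.env.NODE_ENV === 'production';

module.exports = {
    entry: './src/app.js',
    output: {
        filename: 'bundle.js'
    },
    plugins: []
};

// Only minify when building for production.
if (inProduction) {
    module.exports.plugins.push(new webpack.optimize.UglifyJsPlugin());
}

Running NODE_ENV=production webpack then produces the minified bundle, while a plain webpack run stays unminified and easier to debug during development.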
Source: Laracasts

Laravel Forge API

Laravel Forge just announced its first official API, which allows you to create and interact with your servers and sites.

This was a highly requested feature and the API includes support for all the features Forge provides including Servers, Services, Daemons, Firewall Rules, Sites, SSL, and more.

To get started, log in to your Forge account; from your account profile you can generate an API token.

After you have your token, you can start making requests by sending it as a bearer token:

Authorization: Bearer API_TOKEN_HERE

If you are using Guzzle, here is an example:

// Assumes a Guzzle client configured with Forge's base URI:
$client = new GuzzleHttp\Client(['base_uri' => 'https://forge.laravel.com']);

$client->request('GET', '/api/v1/servers', [
    'headers' => ['Authorization' => 'Bearer '.$forgeApiToken]
]);

Check out the Forge API documentation for all the details.

Source: Laravel News

Scrape Screens with zend-dom

Even in this day-and-age of readily available APIs and RSS/Atom feeds, many
sites offer none of them. How do you get at the data in those cases? Through the
ancient internet art of screen scraping.

The problem then becomes: how do you get at the data you need in a pile of HTML
soup? You could use regular expressions or any of the various string functions
in PHP. All of these are easily subject to error, though, and often require some
convoluted code to get at the data of interest.

Alternately, you could treat the HTML as XML and use the DOM extension, which
is typically built into PHP. Doing so, however, requires more than a passing
familiarity with XPath, which is something of a black art.

If you use JavaScript libraries or write CSS fairly often, you may be familiar
with CSS selectors, which allow you to target either specific nodes or groups of
nodes within an HTML document. These are generally rather intuitive:

jQuery('section.slide h2').each(function (index, node) {
  alert(node.textContent);
});

What if you could do that with PHP?

Introducing zend-dom

zend-dom provides CSS selector
capabilities for PHP, via the Zend\Dom\Query class, including:

  • element types (h2, span, etc.)
  • class attributes (.error, .next, etc.)
  • element identifiers (#nav, #main, etc.)
  • arbitrary element attributes (div[onclick="foo"]), including word matches
    (div[role~="navigation"]) and substring matches (div[role*="complement"])
  • descendants (div .foo span)

While it does not implement the full spectrum of CSS selectors, it does provide
enough to generally allow you to get at the information you need within a page.

Example: retrieving a navigation list

As an example, let’s fetch the navigation list from the Zend\Dom\Query
documentation page itself:

use Zend\Dom\Query;

$html = file_get_contents('https://docs.zendframework.com/zend-dom/query/');
$query = new Query($html);
$results = $query->execute('ul.bs-sidenav li a');

printf("Received %d results:\n", count($results));
foreach ($results as $result) {
    printf("- [%s](%s)\n", $result->getAttribute('href'), $result->textContent);
}

The above queries for ul.bs-sidenav li a — in other words, all links
within list items of the sidenav unordered list.

When you execute() a query, you are returned a Zend\Dom\NodeList instance,
which decorates a DOMNodeList in order to
provide features such as Countable, and access to the original query and
document. In the example above, we count() the results, and then loop over them.

Each item in the list is a DOMNode, giving you
access to any attributes, the text content, and any child elements. In our
case, we access the href attribute (the link target), and report the text
content (the link text).

The results are:

Received 3 results:
- [#querying-html-and-xml-documents](Querying HTML and XML Documents)
- [#theory-of-operation](Theory of Operation)
- [#methods-available](Methods Available)

Other uses

Another use case is for testing. When you have classes that return HTML, or if
you want to execute requests and test the generated output, you often don’t want
to test exact contents, but rather look for specific data or fragments within
the document.

We provide these capabilities for zend-mvc
applications via the zend-test component,
which provides a number of CSS selector assertions
for use in querying the content returned in your MVC responses. Having these
capabilities allows testing for dynamic content as well as static content,
providing a number of vectors for ensuring application quality.

Start scraping!

While this post was rather brief, we hope you can appreciate the powerful
capabilities of this component! We have used this functionality in a variety of
ways, from testing applications to creating feeds based on content differences
in web pages, to finding and retrieving image URIs from pages.

Get more information from the zend-dom documentation.

Source: Zend feed

ES2015 Compilation With Babel

Now that you’re a bit more comfortable with the concept of loaders, let’s figure out how to write our JavaScript in ES2015, and then apply a babel-loader to compile everything down to vanilla JavaScript that any browser can understand.
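A minimal babel-loader rule looks roughly like the following. This is a sketch, not the episode's exact configuration; it assumes the babel-loader package and the es2015 preset are installed, and the entry and output names are placeholders:

// webpack.config.js (ES2015 compilation sketch)
module.exports = {
    entry: './src/app.js',
    output: {
        filename: 'bundle.js'
    },
    module: {
        rules: [
            {
                test: /\.js$/,
                exclude: /node_modules/,  // don't transpile dependencies
                loader: 'babel-loader',
                options: {
                    presets: ['es2015']   // compile ES2015 down to ES5
                }
            }
        ]
    }
};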
Source: Laracasts

Loaders Are Transformers

Let’s move on and review Webpack loaders. Loaders allow us to transform and preprocess any number of file types. Maybe you’d like to require Sass files, or compile ES2015 with Babel, or even inject CSS into the browser’s head tag. Let’s review the process of requiring CSS in this episode.
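For a concrete picture of that last case, requiring CSS usually comes down to a rule like the following (a sketch assuming the css-loader and style-loader packages are installed, with placeholder file names):

// webpack.config.js (CSS loader sketch)
module.exports = {
    entry: './src/app.js',
    output: {
        filename: 'bundle.js'
    },
    module: {
        rules: [
            {
                test: /\.css$/,
                // css-loader turns the file into a JS module;
                // style-loader injects it into a <style> tag in the head.
                use: ['style-loader', 'css-loader']
            }
        ]
    }
};

A require('./main.css') in the entry file then flows through both loaders during the build.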
Source: Laracasts

No Really, It’s Okay to Be Unavailable

We live in an uber-connected world. I can’t tell you how many times I’ve complained about shoddy Wi-Fi or the fact that Twitter is down. As much as I love the connection provided by the web and various technologies, somewhere along the way we became terrified of being unavailable. We have to check email maniacally, so we don’t keep our senders waiting. Our cell phone alerts and Slack notifications practically dictate our schedules.

Do a quick Google search of “being unavailable,” and you’ll get a slew of articles on emotional availability and tips on how to be more attractive in relationships. These articles tell women that men don’t like their partners to be around all the time; they find their better halves more attractive when they are invested in their own lives and not constantly dependent on their man’s company.

As a woman, I find many of these articles completely ridiculous. I believe people should live their lives on their own terms and things will just work themselves out. There are, however, real benefits to being somewhat unavailable, especially in business. People fear that putting away their electronics will reflect poorly on who they are as workers.

Here’s a truth: being unavailable is good for both you and your company. Somewhere along the way—and I’ve spoken with disdain about this inane practice before—working a million hours a week became a status symbol. Being available all the time meant you worked hard and smart. Although it has been vehemently debunked time and time again, there are those out there that still believe they have to respond to emails while they’re showering to keep a leg up.

You don’t need to unplug completely to be unavailable. Many of the articles written about disconnecting are coming from people who don’t work closely with technology. They aren’t active on social media, and they don’t depend on Slack and other ChatOps to get their jobs done. Aside from the business aspect of being connected, a lot of us derive joy from hanging out on Twitter and consuming content.

In my personal experience, my iPhone is an extension of my day, not a complete productivity assassin. With that said, it has been proven that using a phone while performing another task leads to poorer performance. Studies have shown even just hearing alerts in the background is comparable to actively using the device. Is the solution to turn off the phone once you get into the house? Not for me.

We can all have our own definition of unplugging. For me, it’s turning off alerts now and then to focus on other things. In the case of my computer, it’s occasionally not bringing it with me to the couch when I’m watching TV. Unplugging is different for everybody. Remember you don’t have to go completely “off the grid” to glean the benefits of time away from the internet.

It’s great to unplug while still engaging in some tech. Don’t forget, however, to take some time away from the grind. Unplugging could just be not answering work emails or going on airplane mode. Whatever you choose, remember it is indeed okay to occasionally be unavailable.

Source: Laravel News

Cloudflare has been leaking custom HTTPS sessions

Cloudflare has reported that under certain circumstances their edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. And some of that data had been cached by search engines.

Tavis Ormandy from Google’s Project Zero first spotted the problem and reported this in the Chromium tracker:

It looked like that if an html page hosted behind cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output (kinda like heartbleed, but cloudflare specific and worse for reasons I’ll explain later). My working theory was that this was related to their “ScrapeShield” feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect all Cloudflare customers.

We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users. Once we understood what we were seeing and the implications, we immediately stopped and contacted cloudflare security.

Once contacted, Cloudflare immediately jumped on the problem and traced it to the HTML parser they used for email obfuscation, Server-side Excludes, and Automatic HTTPS Rewrites.

The fix is now rolled out to all customers, and Cloudflare has published a very detailed report on the background, the timeline of the fix, and how they fixed it. They also worked with Google and other search engines directly to remove any cached HTTP responses.

If you’d like to see which sites run Cloudflare, here is a list of some of the most popular domains.

UPDATE: Laravel News runs on Cloudflare and we received an email this morning saying this site was not affected. Here is a copy of the relevant information from that report:

In our review of these third party caches, we discovered exposed data on approximately 150 of Cloudflare’s customers across our Free, Pro, Business, and Enterprise plans. We have reached out to these customers directly to provide them with a copy of the data that was exposed, help them understand its impact, and help them mitigate that impact.

Your domain is not one of the domains where we have discovered exposed data in any third party caches. The bug has been patched so it is no longer leaking data. However, we continue to work with these caches to review their records and help them purge any exposed data we find. If we discover any data leaked about your domains during this search, we will reach out to you directly and provide you full details of what we have found.

Source: Laravel News
